List:       ceph-users
Subject:    [ceph-users] Re: All older OSDs corrupted after Quincy upgrade
From:       Hector Martin <marcan@marcan.st>
Date:       2022-06-29 10:38:48
Message-ID: 77f6117d-ceaf-c22b-6617-d4b0b3468574@marcan.st

On 29/06/2022 17.51, Stefan Kooman wrote:
> 
> What is the setting of "bluestore_fsck_quick_fix_on_mount" in your 
> cluster / OSDs?

I don't have it set explicitly. `ceph config` says:

# ceph config get osd bluestore_fsck_quick_fix_on_mount
false

> What does a "ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$id 
> get S per_pool_omap" give?

2 on the recently deployed OSD, 1 on the others.
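
(For anyone who wants to compare on their own cluster: a loop roughly
like the below against each stopped OSD should do it; the OSD IDs are
just placeholders.)

# for id in 0 1 2; do ceph-kvstore-tool bluestore-kv \
    /var/lib/ceph/osd/ceph-$id get S per_pool_omap; done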

> FWIW: we have had this config setting since some version of Luminous 
> (when this became a better option than stupid). Recently upgraded to 
> Octopus. Redeployed every OSD with these settings in Octopus. No issues. 
> But we have never done a bdev expansion though.

Yeah, the same config applies to the new OSDs here too, and those are
fine. So if it's related, there must be some other trigger factor too.

> If it's possible to export objects you might recover data ... but not 
> sure if that data would not be corrupted. With EC it first has to be 
> reassembled. Might be possible, but not an easy task.

Basically, if it's going to take more than two days of work to get the
data back (at least as far as getting a recovery operation started;
it's okay if the recovery itself takes a while), I think I'd rather
just wipe.
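
(For the record, if I did go down the export route, my understanding
is that the per-PG step would look roughly like the below, with the
OSD stopped; the PG ID and destination path are placeholders, not
something I've actually run here.)

# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$id \
    --pgid <pgid> --op export --file /some/backup/<pgid>.export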

-- 
Hector Martin (marcan@marcan.st)
Public Key: https://mrcn.st/pub
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io