List:       ceph-users
Subject:    [ceph-users] active+clean+inconsistent with invisible error
From:       gfarnum () redhat ! com (Gregory Farnum)
Date:       2017-04-27 21:48:49
Message-ID: CAJ4mKGbnMfJAdBNBtFLeJD_V6+6ehCVJO-qDqwTEV11WP49xgw () mail ! gmail ! com

On Thu, Apr 27, 2017 at 1:47 PM, Dzianis Kahanovich <mahatma at bspu.by> wrote:
> Dzianis Kahanovich wrote:
>>
>> I have 1 active+clean+inconsistent PG (from the metadata pool) with no real
>> error reported and no other symptoms. All 3 copies are identical (same md5sum).
>> Deep-scrub, repair, etc. just say "1 errors 0 fixed" at the end. As I recall,
>> this PG may have been hand-repaired around September, but it caused no problems
>> for a long time, until an HDD change (less than 4T -> 4T). I am trying to insert
>> additional diagnostic output in PG.cc, but so far with no interesting results.
>> How can I learn more (and fix this)?
>>
>
>
> Sorry, I found more: "omap_digest_mismatch", reported by:
> rados list-inconsistent-obj 6.5a --format=json-pretty

Yep, that means the omap information stream is not the same across
replicas. You'll want to explore using ceph-objectstore-tool to look
at them, work out which copy is wrong, and remove it. I've not done
this myself, but David has described its use a few times; search
through the list archives.
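
Roughly, the workflow would be something like the sketch below. This is
untested, and the object name, OSD ids, and paths are placeholders (it
assumes a default FileStore layout and systemd units), so double-check
against the ceph-objectstore-tool docs before removing anything:

    # 1. Find which OSDs hold the copies of the inconsistent PG.
    ceph pg map 6.5a

    # 2. On each of those OSDs in turn: set noout so data doesn't start
    #    migrating, stop the OSD, then dump the object's omap keys.
    ceph osd set noout
    systemctl stop ceph-osd@<id>
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
        --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
        '<object>' list-omap > omap-keys.osd<id>.txt
    systemctl start ceph-osd@<id>

    # 3. Compare the dumps from the three replicas (diff, md5sum, ...) to
    #    work out which copy is the odd one out.

    # 4. On the OSD with the bad copy only: stop it again and remove that
    #    object, so the next repair rebuilds it from the good replicas.
    systemctl stop ceph-osd@<id>
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
        --journal-path /var/lib/ceph/osd/ceph-<id>/journal \
        '<object>' remove
    systemctl start ceph-osd@<id>
    ceph osd unset noout

    # 5. Re-run repair on the PG and check that the inconsistency clears.
    ceph pg repair 6.5a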
-Greg
