
List:       zfs-discuss
Subject:    [zfs-discuss] Scrub found error in metadata:0x0,
From:       Jim Klimov <jimklimov@cos.ru>
Date:       2011-11-30 17:01:45
Message-ID: qkGL0Vx4tgtK.SSnGVmJp@smtp.cos.ru

Hello experts,

I've finally upgraded my troublesome oi-148a home storage box to oi-151a about a week
ago, using the pkg update method from the wiki page (I'm not certain whether that
repository is pinned at the release version or is a sliding "current" one).

After the OS upgrade I scrubbed my main pool, a 6-disk raidz2, and some checksum
errors were discovered on individual disks, with one non-correctable error at the
raid level. It named a file which was indeed not readable (I/O errors), so I deleted
it. The dataset pool/media has no snapshots, and dedup was disabled on it, so I hoped
the error was gone.
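
For reference, this is roughly the sequence involved; the pool is named "pool"
below to match the pool/media dataset, and the file path is just a placeholder,
not the real name:

  # zpool scrub pool
  # zpool status -v pool
      ...
      errors: Permanent errors have been detected in the following files:
              /pool/media/<damaged-file>
  # rm /pool/media/<damaged-file>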

I cleared the errors (this only zeroed the counters; zpool status still complained
about a metadata error in pool/media:0x4) and reran the scrub. While the scrub was
running, zpool status reported this error plus metadata:0x0. The computer hung and
was reset during the scrub, but apparently resumed from the same spot. When the
operation completed, however, it showed zero checksum errors at both the disk and
raid levels, and the pool/media error was gone, but the metadata:0x0 error is still
in place.
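
In other words, roughly (pool name assumed to be "pool" again):

  # zpool clear pool        # zeroes the per-vdev error counters
  # zpool scrub pool        # full re-scrub, roughly 100 hours on this pool
  # zpool status -v pool    # still lists metadata:0x0 as a permanent error

If I understand the persistent error log right (I may not), it is kept as two
lists that rotate when a scrub completes, so an entry can take two clean scrubs
to disappear; that might be why pool/media:0x4 vanished while metadata:0x0 has
not yet.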

Searching the list archive I found a similar post relating to snv_134 and snv_135;
at the time Victor Latushkin suggested that the pool must be recreated. I have some
unique data on the pool, so I'm reluctant to recreate it (besides, it's problematic
to back up 10 TB of data at home, and it could take weeks to upload it to my
workplace, even if there were that much free space there, which there isn't).

For now I have cleared the errors and started a new scrub. I kinda hope that if the
box doesn't hang, the scrub might confirm that there are no actual errors after all;
I'll see in about 100 hours. The pool is currently imported and automounted, and I
haven't yet tried to export and reimport it.
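
When this scrub completes, the plan is a simple export/import cycle to see
whether the error entry survives a fresh import (again assuming the pool is
literally named "pool"):

  # zpool export pool
  # zpool import pool
  # zpool status -v pool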

Still, I'd like to estimate my chances of living on without either recreating the
pool or losing data. Perhaps there are ways to actually check, fix or forge the
needed metadata? Also, a zdb walk previously found some inconsistencies
(allocated != referenced; roughly the check sketched below). Can that be better
diagnosed or repaired? Could this discrepancy of a few sectors' worth of space be
a cause, or an effect, of the reported metadata error?
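
For completeness, the zdb walk in question is the block-leak check, roughly as
below; note that on a live imported pool zdb reads the on-disk state on the fly
and can report spurious discrepancies, so the numbers are most trustworthy on an
exported or otherwise quiescent pool:

  # zdb -b pool     # traverse all blocks, compare space referenced by the
                    # block tree against space allocated in the space maps
  # zdb -bb pool    # same, plus a per-object-type block breakdown

A clean run ends with a "no leaks" message; otherwise the mismatch between
allocated and referenced space is reported at the end of the traversal.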
Thanks,
// Jim Klimov

sent from a mobile, pardon any typos ;)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

