
List:       freebsd-fs
Subject:    Re: panic: solaris assert: rt->rt_space == 0 (0xe000 == 0x0), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c, line: 153
From:       Fabian Keil <freebsd-listen@fabiankeil.de>
Date:       2015-02-21 11:21:53
Message-ID: 48e5e0b3.6c036ece@fabiankeil.de

Fabian Keil <freebsd-listen@fabiankeil.de> wrote:

> Fabian Keil <freebsd-listen@fabiankeil.de> wrote:
> 
> > Using an 11.0-CURRENT system based on r276255, I just got a panic
> > after trying to export a certain ZFS pool:
[...]
> > The export triggered the same panic again, but with a different
> > rt->rt_space value:
> >
> > panic: solaris assert: rt->rt_space == 0 (0x22800 == 0x0), file:
> > /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/range_tree.c, line: 153
> > I probably won't have time to scrub the pool and investigate this further
> > until next week.
> 
> With this patch and vfs.zfs.recover=1 the pool can be exported without a panic:
> https://www.fabiankeil.de/sourcecode/electrobsd/range_tree_destroy-Optionally-tolerate-non-zero-rt-r.diff
> 
[...]
> Due to interruptions the scrubbing will probably take a couple of days.
> ZFS continues to complain about checksum errors but apparently no
> affected files have been found yet:

The results are finally in: OpenZFS found nothing to repair but continues
to complain about checksum errors, presumably in "<0xffffffffffffffff>:<0x0>",
which totally looks like a legit path that may affect my applications
(more on that tuple after the transcript):

fk@r500 ~ $zogftw zpool status -v
2015-02-21 12:06:29 zogftw: Executing: zpool status -v wde4 
  pool: wde4
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 134h55m with 0 errors on Sat Feb 21 12:00:47 2015
config:

	NAME              STATE     READ WRITE CKSUM
	wde4              ONLINE       0     0   795
	  label/wde4.eli  ONLINE       0     0 3.11K

errors: Permanent errors have been detected in the following files:

        <0xffffffffffffffff>:<0x0>
fk@r500 ~ $zogftw export
2015-02-21 12:07:03 zogftw: No zpool specified. Exporting all external ones: wde4
2015-02-21 12:07:03 zogftw: Exporting wde4
fk@r500 ~ $zogftw import
2015-02-21 12:07:13 zogftw: No pool name specified. Trying all unattached labels: wde4
2015-02-21 12:07:13 zogftw: Using geli keyfile /home/fk/.config/zogftw/geli/keyfiles/wde4.key
2015-02-21 12:07:25 zogftw: 'wde4' attached
2015-02-21 12:08:07 zogftw: 'wde4' imported
fk@r500 ~ $zogftw zpool status -v
2015-02-21 12:08:13 zogftw: Executing: zpool status -v wde4 
  pool: wde4
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 134h55m with 0 errors on Sat Feb 21 12:00:47 2015
config:

	NAME              STATE     READ WRITE CKSUM
	wde4              ONLINE       0     0     9
	  label/wde4.eli  ONLINE       0     0    36

errors: Permanent errors have been detected in the following files:

        <0xffffffffffffffff>:<0x0>
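
For what it's worth, "<0xffffffffffffffff>:<0x0>" is the fallback
formatting zpool(8) uses when an entry in the pool's persistent error
log can no longer be resolved to a file name. A sketch of the lookup
(assuming stock libzfs behaviour; print_error_log() in zpool_main.c is
the real thing, print_one_error() below is just an illustrative name):

#include <sys/param.h>
#include <stdio.h>
#include <libzfs.h>

/*
 * Print one entry of a pool's persistent error log the way
 * "zpool status -v" does.  zpool_obj_to_path() tries to map the
 * <objset, object> pair back to a file name; when the objset can't
 * be opened -- here it is 0xffffffffffffffff, which isn't a valid
 * objset in the first place -- it falls back to the raw hex tuple
 * "<0x...>:<0x...>" seen above.
 */
static void
print_one_error(zpool_handle_t *zhp, uint64_t dsobj, uint64_t obj)
{
	char pathname[MAXPATHLEN * 2];

	zpool_obj_to_path(zhp, dsobj, obj, pathname, sizeof (pathname));
	(void) printf("%7s %s\n", "", pathname);
}

So the error log still references a damaged <objset, object> pair, just
not one that maps to an existing file, which at least is consistent with
the scrub finding nothing to repair.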

Exporting the pool still triggers the sanity check in range_tree_destroy().
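
For reference, the check that fires is the assertion at the top of
range_tree_destroy(), and the patch linked above essentially downgrades
it from a hard panic to a complaint when vfs.zfs.recover is set. A
sketch of the idea, in the range_tree.c context (from memory, not the
literal code or diff):

void
range_tree_destroy(range_tree_t *rt)
{
	/*
	 * The stock code does VERIFY0(rt->rt_space), which is what
	 * produces the "solaris assert: rt->rt_space == 0" panic.
	 * zfs_panic_recover() merely warns when zfs_recover (the
	 * vfs.zfs.recover tunable) is set and panics otherwise, so
	 * the pool can be exported while the accounting leak is
	 * still being tracked down.
	 */
	if (rt->rt_space != 0) {
		zfs_panic_recover("range tree destroyed with "
		    "%llu bytes still accounted for",
		    (u_longlong_t)rt->rt_space);
	}

	if (rt->rt_ops != NULL)
		rt->rt_ops->rtop_destroy(rt, rt->rt_arg);

	avl_destroy(&rt->rt_root);
	kmem_free(rt, sizeof (*rt));
}

Obviously that only papers over the symptom: whatever corrupted the
space accounting in the first place is logged instead of fixed.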

Fabian

