
List:       zfs-discuss
Subject:    [zfs-discuss] ZFS Raidz2 problem, detached drive
From:       andy cowling <acowling () hotmail ! com>
Date:       2010-09-30 11:33:02
Message-ID: 2109083695.211285871617884.JavaMail.Twebapp () sf-app1

I have an X4500 Thumper box with 48 x 500 GB drives set up in a single pool, split into raidz2 sets of 8-10 drives.
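
For reference, the pool was built along these lines (pool and device names here are only examples, not my real ones):

  # one pool made up of several raidz2 vdevs of 8-10 drives each
  zpool create tank \
      raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      ...   # and so on for the remaining drives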

I had a failed disk, which I unconfigured with cfgadm and replaced without any problem, but it wasn't recognised as a Sun drive in format, and unbeknown to me someone else logged in remotely at the time and issued a zpool replace...
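
Roughly the sequence, from memory (the attachment point and device names below are only examples):

  cfgadm -c unconfigure sata1/3      # take the failed drive offline
  # ...physically swap the drive...
  cfgadm -c configure sata1/3        # bring the replacement up
  # and, unknown to me, the other admin then ran something like:
  zpool replace tank c1t3d0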

I corrected the system/drive recognition problem and the drive was then seen and partitioned OK, but zpool status showed two instances of the same drive: one as failed with corrupt data, the other as online, with the set still in a degraded state because the hot spare had been pulled in.

I tried zpool clear on the device, zpool scrub and zpool replace, all with no joy... then (and I kick myself now) I thought I'd detach and reattach the drive...
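
Roughly what I tried (same example names as above):

  zpool clear tank c1t3d0
  zpool scrub tank
  zpool replace tank c1t3d0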

The drive detached with no problem and no questions asked; the failed drive is still shown in zpool status, the online one is gone, and reattaching doesn't seem possible.
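
i.e. something like (example names again):

  zpool detach tank c1t3d0           # succeeded, no warning at all
  zpool attach tank c1t2d0 c1t3d0    # refused: attach is only valid for mirror vdevs, not raidz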

As a temporary measure, in case of further failures, I've added the new drive as a hot spare...
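
That was just (example names):

  zpool add tank spare c1t3d0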

My question is: how do I reattach the drive to the raidz2 set?
Can I use the replace command to replace the currently in-use spare with the new drive if I first remove it as a hot spare?
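
In other words, would something along these lines work (example names, with the old failed entry shown as a placeholder):

  zpool remove tank c1t3d0                   # drop the new drive as a hot spare first
  zpool replace tank <failed-entry> c1t3d0   # swap it in for the failed member / in-use spare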

Or do I have to destroy the whole pool and restore 24 TB of data... please no!
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

