
List:       zfs-discuss
Subject:    [zfs-discuss] Zpool vdev retains old name
From:       Uncle bob <chiln71 () gmail ! com>
Date:       2009-01-30 9:44:00
Message-ID: 1467662518.8741233337470042.JavaMail.Twebapp () sf-app1

Hello All,

  I recently upgraded a test system that had a zpool (test_pool) from S10u5 to S10U6-zfsroot by simply replacing the root disks.  I exported the zpool before I init 5'ed the system.  On S10u5, the zpool vdevs were on c2t#d#; on S10U6-zfsroot, they were on c4t#d#.  I ran zpool import to list the pool, and everything showed up OK.

  I then ran zpool import test_pool, and the import was successful.  I was prompted to upgrade the zpool version, so I ran zpool upgrade test_pool.  Next, I ran zpool status test_pool.  To my surprise, my hot spare still had the old c2t#d# vdev name and was unavailable; all the other zpool vdevs had the new c4t#d# names.  I exported the zpool one more time and reviewed the output of zpool import: it showed test_pool with all the correct vdev names (c4t#d#) as online.
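For completeness, the sequence I ran was roughly the following (the exact device names are elided, as above):

```shell
# On the old S10u5 boot environment, before swapping the root disks:
zpool export test_pool

# On the new S10U6-zfsroot boot environment:
zpool import                 # list importable pools; test_pool showed up OK
zpool import test_pool       # import succeeded
zpool upgrade test_pool      # bump the on-disk pool version, as prompted
zpool status test_pool       # spare still listed under its old c2t#d# name
```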

  I then pulled the spare, and the spare vdev name changed from c4t#d# online to c2t#d# unavailable.  After re-inserting the spare, all zpool vdevs were c4t#d# and online.

  I re-imported the zpool, but zpool status still showed the spare with the old vdev name as unavailable.  Has anyone seen this?  If so, how can I clean/reset/update/fix the zpool vdev names?
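For what it's worth, the only thing I can think to try next is dropping the stale spare record and re-adding the disk under its new name. The device placeholders below are hypothetical, and I have not verified that this clears the condition:

```shell
# Remove the spare entry that still carries the stale name
# (<old_spare> stands for the c2t#d# device shown in zpool status):
zpool remove test_pool <old_spare>

# Re-add the same disk as a spare under its new controller name
# (<new_spare> stands for the corresponding c4t#d# device):
zpool add test_pool spare <new_spare>
```

I would appreciate confirmation from anyone who has tried this before I run it on the pool.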

Thanks for your replies.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


