
List:       linux-btrfs
Subject:    Re: How to remove a device on a RAID-1 before replacing it?
From:       Hugo Mills <hugo-lkml@carfax.org.uk>
Date:       2011-03-29 21:15:46
Message-ID: <20110329211546.GA4082@carfax.org.uk>

On Tue, Mar 29, 2011 at 05:01:39PM -0400, Andrew Lutomirski wrote:
> On Tue, Mar 29, 2011 at 4:21 PM, cwillu <cwillu@cwillu.com> wrote:
> > On Tue, Mar 29, 2011 at 2:09 PM, Andrew Lutomirski <luto@mit.edu> wrote:
> >> I have a disk with a SMART failure.  It still works but I assume it'll
> >> fail sooner or later.
> >>
> >> I want to remove it from my btrfs volume, replace it, and add the new
> >> one.  But the obvious command doesn't work:
> >>
> >> # btrfs device delete /dev/dm-5 /mnt/foo
> >> ERROR: error removing the device '/dev/dm-5'
> >>
> >> dmesg says:
> >> btrfs: unable to go below two devices on raid1
> >>
> >> With mdadm, I would fail the device, remove it, run degraded until I
> >> get a new device, and hot-add that device.
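> >>
> >> For reference, the mdadm sequence is roughly this (the array and
> >> partition names here are just for illustration):
> >>
> >> # mdadm /dev/md0 --fail /dev/sdb1      (mark the failing member faulty)
> >> # mdadm /dev/md0 --remove /dev/sdb1    (detach it from the array)
> >>   ... the array runs degraded until the replacement arrives ...
> >> # mdadm /dev/md0 --add /dev/sdc1       (hot-add the new disk; resync starts)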
> >>
> >> With btrfs, I'd like some confirmation from the fs that data is
> >> balanced appropriately so I won't get data loss if I just yank the
> >> drive.  And I don't even know how to tell btrfs to release the drive
> >> so I can safely remove it.
> >>
> >> (Mounting with -o degraded doesn't help.  I could umount, remove the
> >> disk, then remount, but that feels like a hack.)
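> >>
> >> (The closest thing I know of is inspecting the chunk allocation,
> >> e.g.:
> >>
> >> # btrfs filesystem show
> >> # btrfs filesystem df /mnt/foo
> >>
> >> which at least shows how much space is allocated as RAID1, but
> >> doesn't confirm that one particular device is safe to pull.)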
> >
> > There's no "nice" way to remove a failing disk in btrfs right now.
> > ("btrfs dev delete" is more of an online management thing, for
> > politely removing a perfectly functional disk you'd like to use for
> > something else.)  As I understand it, the only way to do it right
> > now is to umount, remove the disk, remount with -o degraded, and
> > then btrfs add the new device.
> >
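> > Concretely, something like this (all device names other than the
> > failing /dev/dm-5 are made up here):
> >
> > # umount /mnt/foo
> >   ... physically remove or detach the failing disk ...
> > # mount -o degraded /dev/dm-4 /mnt/foo
> > # btrfs device add /dev/dm-6 /mnt/foo
> > # btrfs filesystem balance /mnt/foo
> >
> > with the balance at the end to get everything mirrored onto the
> > new device again.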
> 
> Well, the disk *is* perfectly functional.  It just won't be for long.
> 
> I guess what I'm saying is that btrfs dev delete isn't really enough
> here: I want to be able to convert to non-RAID and back, or to
> degraded and back, or something else equivalent.

   RAID conversion isn't quite ready yet, sadly. As I understand it,
you've got two options:

 - Yoink the drive (thus making the fs run in degraded mode), add the
   new one, and balance to spread the duplicate data onto the new
   device.

 - Add the new drive to the FS first, then use btrfs dev del to remove
   the original device. This should end up writing all the replicated
   data to the new drive as it "removes" the data from the old one.

   Of the two options, the latter is (for me) the favourite, as you
never have a period where the filesystem is holding just a single
copy of the data.
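
   A sketch of that second route, with an invented name for the new
disk (/dev/sdc):

 # btrfs device add /dev/sdc /mnt/foo
 # btrfs device delete /dev/dm-5 /mnt/foo

   With three devices present, the delete is allowed to proceed (you
no longer go below two devices on raid1), and it migrates the old
drive's copies onto the remaining devices as it goes. For the first
route you'd instead mount with -o degraded, "btrfs device add" the
new disk, and then run "btrfs filesystem balance /mnt/foo" to restore
the second copy of everything.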

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- Prof Brain had been in search of The Truth for 25 years, with ---  
             the intention of putting it under house arrest.             

["signature.asc" (application/pgp-signature)]
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

[prev in list] [next in list] [prev in thread] [next in thread] 

Configure | About | News | Add a list | Sponsored by KoreLogic