
List:       evms-devel
Subject:    Re: [Evms-devel] RAID5 Expansion Question
From:       "David M. Strang" <dstrang () shellpower ! net>
Date:       2005-05-26 15:35:04
Message-ID: 000e01c56208$7e773850$be00a8c0 () NCNF5131FTH


----- Original Message ----- 
From: Kevin Corry
To: evms-devel@lists.sourceforge.net
Cc: David M. Strang
Sent: Thursday, May 26, 2005 11:03 AM
Subject: Re: [Evms-devel] RAID5 Expansion Question


> On Mon May 23 2005 11:26 am, David M. Strang wrote:
> > On Monday, May 23, 2005 10:53 AM, Mike Tran wrote:
> > >> Hello -
> > >>
> > >> I just recently began looking at EVMS because I'm looking to 'grow' my
> > >> software raid 5. The raid consists of 14x72GB - I'm looking to double
> > >> it in size to 28x72GB.
> > >
> > >Make sure that the raid5 array was created using MD superblock format
> > >1. With the MD 0.90 superblock, the maximum number of disks supported is
> > >27.
> >
> > /dev/md0:
> >         Version : 00.90.01
> >   Creation Time : Sat Mar 19 08:46:12 2005
> >      Raid Level : raid5
> >      Array Size : 931864960 (888.70 GiB 954.23 GB)
> >     Device Size : 71681920 (68.36 GiB 73.40 GB)
> >    Raid Devices : 14
> >   Total Devices : 14
> > Preferred Minor : 0
> >     Persistence : Superblock is persistent
> >
> >
> > Looks like I'm using the old superblock; is it possible to upgrade it?
>
> I don't believe there's a way to switch the superblock version. Thus,
> you're limited to 27 disks. Of course, I'm not sure that making a 27-way
> raid5 is the best approach. First, with raid5 you can only lose one disk
> at a time and still have a useable array. Obviously the more disks you
> add, the higher the chance of multiple disks failing at the same time.
> Second, the more disks you add, the bigger a performance hit you're going
> to take on writes. Every write to a raid5 must update the parity for the
> stripe that is written to. The MD driver does do some caching to help
> this, but in the worst case a single write can cause 25 reads to get all
> the data to calculate the parity for that stripe.
>
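If I'm following the arithmetic, that worst case is just the stripe width:
with 27 disks a stripe holds 26 data chunks plus one parity chunk, so
rewriting a single chunk without the rest of the stripe in cache means
reading the other 25 data chunks before the new parity can be computed.
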
> It might be easier in this case to simply create a second raid5 region and
> drive-link the two together. It's even easier if you've built an LVM
> container on top of the raid5. You would simply add the new raid5 to the
> container and expand the LVM regions as desired onto the new freespace.

This is what I started to do initially, except I screwed up creating the
drive-link feature. I did an 'Add Feature' to each array and added a
drivelink... I couldn't remove it, so I ended up trashing the entire array
and recreating it. I'm in the middle of the restore now. Write speeds are
still moderately respectable: I'm getting 650MB/minute from the tape drive,
which is only 200MB/minute slower than the backup was. I didn't try writing
to the single array, but I know reads are much faster than writes anyway.
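
For the archives, the LVM-container route Kevin describes would look
something like the sketch below if you were driving it with plain mdadm and
LVM2 instead of the EVMS tools (which is not what I'm using here, so treat
it only as a rough outline - the device names, volume group name and sizes
are all made up):

  # build a second 14-disk raid5 out of the new drives
  # (adding "-e 1" for a version-1 superblock may be worth it per Mike's
  #  note, if the mdadm on the box supports it)
  mdadm --create /dev/md1 --level=5 --raid-devices=14 \
      /dev/sd[o-z]1 /dev/sdaa1 /dev/sdab1

  # make the new region a PV, add it to the existing volume group,
  # then grow the logical volume onto the new freespace
  pvcreate /dev/md1
  vgextend vg_data /dev/md1
  lvextend -L +880G /dev/vg_data/lv_data

  # finally grow the filesystem to match (resize2fs for ext2/ext3,
  # just as an example)
  resize2fs /dev/vg_data/lv_data

The nice part is that nothing has to be reshaped in place; the cost is that
the data ends up concatenated across two arrays (drive-linked or
LVM-spanned) rather than striped across all 28 disks.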


> > >> The time estimates are like 256 hours... 10.5 days. Is this even close
> > >> to accurate? Does anyone have any timings on growing a raid? It almost
> > >> seems like it would be faster to back up the raid, recreate it, and
> > >> restore the raid.
> > >
> > >Lowering the evms logging level to "error" might help.
> > >
> > >If your file system data is much less than 14x72GB, I would do a backup,
> > >create a new raid5, and then restore the data. It will be interesting to
> > >find out the total time for this procedure.
> >
> > I do have a complete backup; however, I'm sitting at 78% used. While I
> > could do the backup / recreate / restore method, I'm looking for a best
> > practice. It may not always be possible or worthwhile to do a full backup
> > and restore.
>
> There's definitely a big performance issue with raid5 resizes.
> Unfortunately, I've not had the time to do an in-depth analysis of the
> performance to try to find ways to cut down on the resize time. But, it's
> on my EVMS to-do list. :)
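
Just as a very rough sanity check on the numbers: 78% of the 888GiB array
is somewhere around 700GB of data, and at the ~650MB/minute I'm seeing off
the tape drive that's on the order of 18-20 hours of restore time - a long
way under the 256-hour estimate the resize was quoting, even allowing for
the dump and the array recreation on top of it.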

Part of it may just be the pain of software raid5 itself. It took nearly
10 hours to synchronize an empty 1.9TB software raid. I'm curious how much
faster it would be on a hardware-based controller...
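
For what it's worth, that works out to only about 2MB/s of resync progress
per member (roughly 70GB of member size in ~10 hours). I don't know yet
whether that's the md resync throttle, the bus, or just the parity XOR
overhead, but before the next rebuild I'll at least look at the throttle
knobs - something along these lines (the 50000 figure is only a guess at a
sane value for this box):

  # current resync throttle, in KB/s per device
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

  # temporarily raise the floor while a resync is running
  echo 50000 > /proc/sys/dev/raid/speed_limit_min

  # watch the reported resync speed
  cat /proc/mdstat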


-- David M. Strang 


