
List:       linux-xfs
Subject:    Re: Practical file system size question
From:       Rafa Griman <rafagriman@gmail.com>
Date:       2013-04-17 14:02:15
Message-ID: <CANRt_=a91yzfw0f+5ktwUk0w5F4PuM8aWrPA0hONg1i=P2OgWQ@mail.gmail.com>

Hi :)

On Wed, Apr 17, 2013 at 3:18 PM, Robert Bennett
<rbennett@mail.crawford.com> wrote:
>
> We have been running our storage on XFS for over three years now and are
> extremely happy.  We run each file system on LSI hardware RAID with
> 3 RAID groups of 12+2, plus 3 hot spares, and 8 file systems per head
> node.  These run on 2TB SAS HDDs.  The individual file system size is
> 66TB in this configuration.  The time has come to look into moving to
> 3TB SAS HDDs.  With very rudimentary math, this should move us to the
> neighborhood of 99TB per file system.  Our OS is Linux
> 2.6.32-279.11.1.el6.x86_64.
>
> The question is: does anyone have experience with this type of
> configuration, in particular with 3TB HDDs and a file system size of
> 99TB?  The rebuild time with 2TB drives is ~24 hours.  Should I expect
> the rebuild time for the 3TB drives to be ~36 hours?
>
> Thanks for all the hard work all of you do on a file system that continues
> to dazzle.


When you say "LSI Hardware Raid", I assume it's some sort of
NetApp/Engenio storage array (aka E2600, E2400, E5500). Am I correct?
If so, you should try their new Dynamic Disk Pools (DDP) feature:

http://www.netapp.com/us/system/pdf-reader.aspx?m=ds-3309.pdf&cc=us

http://www.netapp.com/us/technology/dynamic-disk-pools.aspx

It lowers your rebuild times quite a lot: reconstruction is spread
across every drive in the pool instead of funnelling into a single
hot spare.
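
To see why that matters, here is a toy model (my own simplification,
not NetApp's actual reconstruction algorithm).  In a classic RAID
group the rebuild is bottlenecked by the write rate of the one spare;
in a DDP pool the reconstructed data is scattered across spare space
on all remaining drives, so they share the write work:

    # Toy model -- my simplification, not NetApp's algorithm.
    def classic_rebuild_hours(drive_tb, write_mb_s=23):
        # write_mb_s=23 is the effective rate implied by your
        # 24h rebuild of a 2TB drive; all reconstructed data
        # funnels into ONE hot spare.
        return drive_tb * 1e6 / write_mb_s / 3600   # TB -> MB

    def ddp_rebuild_hours(drive_tb, pool_drives=42, write_mb_s=23):
        # Same data, but (pool_drives - 1) drives share the writes.
        # pool_drives=42 assumes your 3 x (12+2) drives in one pool.
        return drive_tb * 1e6 / (write_mb_s * (pool_drives - 1)) / 3600

    print(classic_rebuild_hours(3.0))   # ~36 hours
    print(ddp_rebuild_hours(3.0))       # well under 1 hour, idealized

The idealized DDP number is far better than you will see in practice
(rebuild traffic competes with host I/O and the controller throttles
it), but the scaling is the point.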

If you mean internal LSI RAID PCIe controllers in a server ... I'm
afraid I can't be of much help there :(

HTH

   Rafa

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs