
List:       linux-xfs
Subject:    Re: Anyone using XFS in production on > 20TiB volumes?
From:       Chris Wedgwood <cw@f00f.org>
Date:       2010-12-22 17:32:09
Message-ID: 20101222173209.GA29291@puku.stupidest.org

On Wed, Dec 22, 2010 at 12:10:06PM -0500, Justin Piszcz wrote:

> Do you have an example of what you found?

i don't have the numbers anymore; they're with a previous employer.

basically, using dbench (these were cifs NAS machines, so dbench
seemed as good or bad a test as anything), performance was about 3x
better on the 'new' setup than the 'old' with a small number of
workers, and about 10x better with a large number
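
something along these lines reproduces that kind of sweep (a sketch
only; the mount point and worker counts here are placeholders, not the
original test setup):

    #!/bin/sh
    # run dbench against the filesystem under test with increasing
    # worker counts, keeping just the throughput summary for each run
    MNT=/mnt/xfstest        # hypothetical mount point
    for n in 1 4 16 64 128; do
        dbench -D "$MNT" -t 60 "$n" | tail -2
    done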

i don't know how much difference inode64 and getting the geometry
right each made individually, but both were quite measurable in the
graphs i made at the time
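
for anyone following along, those two knobs look roughly like this (a
sketch; the su/sw values assume a 64k chunk and a 5-data-disk raid5
leg, which may not match your array):

    # align xfs to the raid geometry at mkfs time
    # (su = hw raid chunk size, sw = data disks per stripe)
    mkfs.xfs -d su=64k,sw=5 /dev/sdb1

    # mount with inode64 so inodes (and the data near them) spread
    # across the whole volume instead of clustering in the first 1TB
    mount -o inode64 /dev/sdb1 /mnt/data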


from memory the machines were raid50 (4x (5+1)) with 2TB drives, so
about 38TB usable on each one
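
that figure falls out of the layout arithmetic:

    # 4 raid5 legs x (5 data + 1 parity) drives x 2TB per drive
    # usable = 4 * 5 * 2TB = 40TB (decimal) ~= 36.4TiB
    # which, after overhead and rounding, is the "about 38TB" above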

initially these machines used 3ware controllers and later on LSI (the
two product lines have since merged, so it's not clear how much
difference that makes now)

in testing, 16GB of RAM wasn't enough for xfs_repair, so the machines
were upped to 64GB; that's likely largely a result of there being 100s
of millions of small files (as well as some large ones)
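
if you want to size that in advance, reasonably recent xfs_repair can
estimate its own memory needs in a dry run (check your version; the
-m 1 trick just forces it to report the requirement rather than run):

    # no-modify mode, verbose, memory artificially capped at 1MB;
    # xfs_repair then prints how much memory it actually wants
    xfs_repair -n -vv -m 1 /dev/sdb1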

> Is it dependent on the RAID card?

perhaps. do you have a BBU, and is the write cache (WC) enabled?
certainly we found the LSI cards to be faster in most cases than the
(now old) 3ware


where i am now i use larger chassis and no hw raid cards; sw raid on
these works spectacularly well, with the exception of bursts of small
seeky writes (which a BBU + WC on a hw controller soaks up quite well)
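
for completeness, the sw raid equivalent of the raid50 layout above
might look like this (a sketch with hypothetical device names; mkfs.xfs
can usually read the stripe geometry straight off an md device, so
su/sw shouldn't need setting by hand):

    # four 6-drive raid5 legs...
    mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
    mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[h-m]
    mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sd[n-s]
    mdadm --create /dev/md3 --level=5 --raid-devices=6 /dev/sd[t-y]

    # ...striped together into a raid0, i.e. raid50
    mdadm --create /dev/md4 --level=0 --raid-devices=4 /dev/md[0-3]

    # mkfs.xfs picks the geometry up from md automatically
    mkfs.xfs /dev/md4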
