
List:       linux-poweredge
Subject:    Re: CERC Performance Question
From:       Neil Jones <ncjones () cs ! ucsd ! edu>
Date:       2006-07-27 16:19:18
Message-ID: 46716A1D-6835-4CFA-9021-0CFAC68BAD43 () cs ! ucsd ! edu

You, um, wouldn't happen to have the part number for that controller,
would you?  I noticed that our database runs the same query more slowly
than an older and otherwise worse system that has its SATA drives
managed through Linux LVM.  I'd resigned myself to living with it, but
if a simple controller swap would solve the problem, I'd be pleased.
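
Before buying anything I'll probably run a quick sequential-write
comparison on both boxes, something along these lines (the path and
sizes are just examples; point it at the volume the database lives on):

  time dd if=/dev/zero of=/data/ddtest bs=1M count=2048
  time sync
  rm /data/ddtest

If the CERC box is dramatically slower there too, the controller (or
its RAID 5 write penalty) looks like the culprit rather than the
database itself.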

..Neil

On Jul 27, 2006, at 9:11 AM, Fred Skrotzki wrote:

> We had other issues with the controller (one being that it is SATA 1,
> not the newer SATA 2).  What we could not find out until it was too
> late is that you can't create a single array larger than 2 TB; it
> simply truncates at that point.  We purchased a base 1800 with the
> controller and a single 80 GB SATA drive, intending to pull the 80 and
> max it out with 500 GB drives.
>
> In the end our solution was to go out and purchase a supported 8-port
> SATA 2 controller (an Adaptec in this case), and it rocks compared to
> the supplied one.
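>
> If you want to check whether a controller is quietly capping an array,
> one rough sanity check is just to ask the kernel what size it reports
> for the logical drive (the device name below is only an example):
>
>   blockdev --getsize64 /dev/sda
>   cat /proc/partitions
>
> If the reported size sits at roughly 2 TB no matter how many disks are
> in the array, you've hit the same wall we did.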
>
> -----Original Message-----
> From: linux-poweredge-bounces@dell.com
> [mailto:linux-poweredge-bounces@dell.com] On Behalf Of Neil Jones
> Sent: Thursday, July 27, 2006 11:44 AM
> To: Sturgis, Grant
> Cc: linux-poweredge@dell.com
> Subject: Re: CERC Performance Question
>
> Grant ---
>
> These numbers seem right.  I have the same system (4 drives, 500 GB
> each) on a dual-processor PE1800.
>
> I get slightly worse performance under RAID 5, actually.  And the
> kicker is that the write block size doesn't really affect performance,
> so there doesn't seem to be any case where RAID 5 works really well.
> Load average jumping to 18 is also about right, and when that happens
> two things can occur:
>    - it's almost impossible to use the system, even from a console
>    - over prolonged periods (> 1 day), the aacraid driver falls into an
>      error mode and emits messages to syslog, and the system needs to
>      be rebooted (a couple of quick ways to watch for this are below)
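>
> When it gets into that state, the standard tools are enough to see it
> happening (nothing here is CERC-specific; adjust the log path to
> taste):
>
>   grep -i aacraid /var/log/messages | tail -20
>   vmstat 5
>   iostat -x 5     # from the sysstat package
>
> The tell-tale pattern is a huge iowait percentage while actual
> throughput to the array stays tiny.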
>
> This happens with RHEL 4 and 3, and I do not believe it is specific to
> RH --- it seems like a hardware "feature".  If someone has the right
> kung-fu to suddenly make this configuration speedy, I'm all ears.
>
> My solution was to split the data into "stuff I really need to be
> safe" and "stuff I can back up periodically".  Then I made two RAID
> arrays, one RAID 0 and one RAID 1; I put the stuff I didn't need to be
> recoverable on the RAID 0 and the important stuff on the RAID 1.  Now
> the system is only somewhat slow instead of incredibly slow.
> (Writes are slow, but the load only jumps to 2-3 and the system can be
> used concurrently.  Reads are tolerably fast but not stellar.)
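>
> For what it's worth, the split is nothing fancy: the two logical
> drives just show up as separate block devices and get mounted side by
> side, roughly like this in /etc/fstab (device names and mount points
> are examples, not necessarily what the CERC hands you):
>
>   # RAID 0 logical drive: scratch / rebuildable data
>   /dev/sdb1   /scratch   ext3   defaults   1 2
>   # RAID 1 logical drive: the important stuff
>   /dev/sdc1   /safe      ext3   defaults   1 2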
>
> Truthfully, this all seems quite slow for a SATA system, but it's
> usable.  Dell, if you're listening, I think you should either *clearly
> mark* the CERC controller as a stop-gap measure for academics and
> flailing internet startups, or not offer it at all.  Without a doubt,
> you need to remove the "supports RAID 5" claim from the website; while
> the controller might literally support the RAID 5 storage scheme, it is
> too slow for even casual single-user use.  I was thrilled to get 1.5 TB
> of RAIDed space on a system good for database serving for about $5k,
> but it was deceptively inexpensive.
>
> ..Neil
>
> On Jul 25, 2006, at 7:41 AM, Sturgis, Grant wrote:
>
>> Can anyone comment on whether these numbers look normal or not?
>>
>> Thanks.
>>
>> Sturgis, Grant wrote:
>>> Greetings List,
>>>
>>> I am experiencing very poor performance with a CERC SATA 1.5/6ch RAID
>>> card with firmware v4.1-0.  Four disks are configured in RAID 5 and
>>> the OS is RHEL ES 4.0.  I understand that this is not a high
>>> performance RAID card or RAID configuration (there is a write penalty
>>> associated with RAID 5), but this just seems ridiculous.
>>>
>>> Created a 5GB file with the command:
>>>
>>> dd if=/dev/zero of=big_file bs=1024 count=5120000
>>>
>>> and then timed a move operation from an NFS mount to the local RAID
>>> array connected at 1000Mbps end-to-end:
>>>
>>> time mv /hosts/server/test/big_file local_test_folder
>>>
>>> and the results were:
>>>
>>> 0.623u 34.342s 9:44.65 5.9%     0+0k 0+0io 3pf+0w
>>>
>>> This is over 10 minutes to move that much data over a gigabit
>>> connection.  What's even worse is that the load average on the system
>>> exceeded 18, leaving it unusable for all other users: there was no
>>> response to any commands or login attempts.
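>>>
>>> (For comparison, I may also try a purely local write so NFS is out of
>>> the picture, something along the lines of:
>>>
>>>   time dd if=/dev/zero of=local_big_file bs=1024 count=5120000
>>>   time sync
>>>
>>> If that is just as slow, the array itself is clearly the bottleneck.)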
>>>
>>> Does this seem reasonable to you?  What can I do to improve this
>>> performance (short of doing away with RAID 5)?
>>>
>>> Any comments and suggestions are very much appreciated.
>>>
>>> Thanks,
>>>
>>> Grant
>>> ----------------

_______________________________________________
Linux-PowerEdge mailing list
Linux-PowerEdge@dell.com
http://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq