List:       bricolage-general
Subject:    Re: Load Testing
From:       David Wheeler <david@kineticode.com>
Date:       2006-08-09 20:42:24
Message-ID: 086F5AA5-107D-43F9-A5EB-C6E363C0F30B@kineticode.com

On Aug 9, 2006, at 16:04, Matt Rolf wrote:

> We found some interesting things when we used sftp to preview  
> stories on a different box.  When you relate large pieces of media  
> (8MB+) to a story and preview it, the httpd process goes bonkers  
> and gobbles up all available CPU resources until it has moved the  
> related media pieces to the preview box. We expected to see an I/O  
> hit, but wait time was sitting at 3% or so in top.  Changing  
> ApacheSizeLimit didn't have any effect on this.  Three other users
> were also able to publish large stories without a performance hit
> at the same time as the massive one, but we could foresee issues
> once the box becomes more heavily used.  Ultimately we will solve
> this problem by previewing on the same box, but does anyone have an  
> idea why it might do that?

For SFTP distribution, be sure that you have Math::BigInt::GMP
installed for optimal Net::SFTP performance. Also, if you can upgrade
to 1.10, enable the new AUTO_PREVIEW_MEDIA bricolage.conf directive.
With that enabled, media files are distributed to the preview
server(s) only once, when you upload them and save the media document,
rather than *every time* you preview a story that has a relationship
to that media document.
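
In bricolage.conf that would look something like this (a sketch;
check the conf file that ships with 1.10 for the exact value syntax):

    # bricolage.conf
    AUTO_PREVIEW_MEDIA = Yes

And to confirm that Math::BigInt is really picking up the GMP backend
(Net::SFTP's SSH key exchange is where the pure-Perl math gets slow),
something like this should print Math::BigInt::GMP rather than
Math::BigInt::Calc:

    % perl -MMath::BigInt=lib,GMP -e 'print Math::BigInt->config->{lib}, "\n"'
    Math::BigInt::GMP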

> 56MB saw processes spawn and die instantly.  11 seconds was a long  
> life.  100MB saw nothing die.  Right now we're at 65MB and
> processes are living between 1 and 10 minutes at the extremes.  As
> usage goes up, life expectancy approaches 10 seconds and 7  
> requests.  As usage goes down, we have 10 minutes and 40+  
> requests.  Average is about 2 minutes, 25 requests.  We may adjust  
> it upward.

Yes, I think that I would. BTW, it turns out that what I was seeing  
at my customer site was Perl requesting a very large memory partition  
from a 64-bit system. It wasn't actually *using* that much memory.  
Rather, it was starting with 66MB processes and going from there. But  
I'm not convinced that Apache::SizeLimit is checking the correct  
memory value on 64-bit Linux, and, furthermore, the latest version of
Apache::SizeLimit has a bug that prevents it from working *at all*  
with Bricolage. The fix is here, for those of you who need it:

   http://marc.theaimsgroup.com/?l=apache-modperl-dev&m=115438042519704
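
For anyone tuning the limit by hand under mod_perl 1.x, the knobs look
something like this (a sketch only; the 65MB figure is just Matt's
current setting, and Apache::SizeLimit measures sizes in KB):

    # In startup.pl, or wherever you load modules:
    use Apache::SizeLimit;

    # Kill off a child once its total process size tops ~65MB.
    $Apache::SizeLimit::MAX_PROCESS_SIZE = 65 * 1024;

    # Only check every 5th request to keep the overhead down.
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;

Then install it as a handler in httpd.conf:

    PerlFixupHandler Apache::SizeLimit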

Best,

David
