List:       zope
Subject:    Re: [Zope] large images to a database via zope.
From:       Toby Dickenson <tdickenson () devmail ! geminidataloggers ! co ! uk>
Date:       2001-04-17 10:51:11
On Tue, 17 Apr 2001 05:00:35 -0400, ethan mindlace fremen
<mindlace@digicool.com> wrote:

>--On Tuesday, April 17, 2001 02:41:05 -0400 marc lindahl <marc@bowery.com> 
>wrote:
>
>> So much for size, now for performance.  Ethan, though zope isn't
>> 'optimized for the rapid delivery of large binary objects', is it better
>> at pulling them out of an object than the local FS?  OR via a DB adapter?
>> For any particular reasons (multithreading, maybe?)
>
>Well, a thread is locked for the entire time it is writing out to the end 
>user. So get 4 simultaneous requests for a large file, and your site is now 
>unresponsive (in a stock install.)

I'm sure that's not true. Data is queued by medusa 'producers', and the
separate medusa thread takes care of trickling responses back to
clients.

The worker threads (4 of them in a stock install) are only blocked for
as long as it takes them to calculate the response and hand it over
to medusa.
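To make the idea concrete, here's a toy sketch (not the actual medusa
source; the class name and chunk size are my own) of the producer
pattern: the async loop repeatedly calls more() whenever the client
socket is writable, so no worker thread sits blocked while a slow
client downloads.

```python
class SimpleProducer:
    """Serve a byte string in fixed-size chunks via more().

    medusa-style producers return the next chunk of the response
    from more(), and an empty string once the data is exhausted.
    """

    def __init__(self, data, chunk_size=8192):
        self.data = data
        self.chunk_size = chunk_size

    def more(self):
        # Hand back one chunk per call; the async loop drives this.
        chunk = self.data[:self.chunk_size]
        self.data = self.data[self.chunk_size:]
        return chunk
```

The worker thread's only job is to build such a producer and hand it
to the medusa loop; draining it happens entirely in the async thread.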

Most responses get buffered in memory, but zope's File and Image
objects take some special steps to ensure that the producer buffers
data in a temporary file if the data is 'large' (see HTTPResponse.py
for details). This file copy is the only 'unnatural' overhead of
serving large objects from zope.
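Roughly, the spooling logic amounts to something like the following
(an illustrative sketch, not Zope's actual code; the function name and
the 128K threshold are assumptions, so check HTTPResponse.py for the
real values):

```python
import tempfile

LARGE = 128 * 1024  # assumed threshold for "large" data


def make_body(data):
    """Return the response body either in memory or spooled to disk.

    Small bodies stay in memory; large ones are copied once into a
    temporary file, so the worker thread can hand medusa a file-backed
    producer and immediately move on to the next request.
    """
    if len(data) < LARGE:
        return data                   # buffered in memory
    spool = tempfile.TemporaryFile()  # the one extra file copy
    spool.write(data)
    spool.seek(0)
    return spool
```

That single write to a temporary file is the "unnatural" overhead
mentioned above; everything after it is ordinary file I/O in the
medusa thread.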




Toby Dickenson
tdickenson@geminidataloggers.com

_______________________________________________
Zope maillist  -  Zope@zope.org
http://lists.zope.org/mailman/listinfo/zope
**   No cross posts or HTML encoding!  **
(Related lists -
 http://lists.zope.org/mailman/listinfo/zope-announce
 http://lists.zope.org/mailman/listinfo/zope-dev )
