
List:       lustre-discuss
Subject:    [Lustre-discuss] write RPC & congestion
From:       jeremy.filizetti () gmail ! com (Jeremy Filizetti)
Date:       2010-12-28 3:31:47
Message-ID: AANLkTimTK=YmifnrDede3CVB6E2o8h12UyWCCgnaR6gF () mail ! gmail ! com

> 

> I agree with Oleg that this is the better approach, also from another point
> of view. While Lustre tries to form full 1 MB or 4 MB (whatever) IO RPCs,
> this is not always possible. One such case is IO to many small files: there
> is simply no way to pack pages that belong to multiple files into one IO
> RPC. This causes lots of small IO that will definitely under-load the network.
> 
> 
I'm really targeting sequential single-client access, where these RPCs will be
filled.


> While tuning max_rpcs_in_flight you may want to check that the network is
> not overloaded. This can be done by watching "threads_started" for the
> service on the server. This is the number of threads currently used for
> handling RPCs on the server for that service. If it stops growing as you
> increase max_rpcs_in_flight, your network is becoming the bottleneck.
> 
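
For reference, the check suggested above can be done with something like the
following (a minimal sketch assuming the standard lctl parameter names, which
may vary a bit between Lustre versions):

    # on the client: current RPC concurrency per OSC
    lctl get_param osc.*.max_rpcs_in_flight

    # on each OSS: I/O service threads currently started
    lctl get_param ost.OSS.ost_io.threads_started

    # raise the client-side concurrency, then re-check threads_started
    lctl set_param osc.*.max_rpcs_in_flight=16

If threads_started stops growing as max_rpcs_in_flight is raised, the network
is likely the bottleneck, as described above.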

It is just a single client connecting to 2 OSSs.  I did check to make sure,
though.  The test I just ran had 128 threads on each OSS.  The latest data
incorporates the patch from bug 16900, but with max_pages_per_rpc modified to
make a 1 or 4 MB RPC.  I didn't see a huge difference this time around, and
the test was slightly more balanced with respect to the parameters used.  I
have some tests running now with no patch at all instead of just a limited
max_pages_per_rpc, but AFAIK those should be equivalent.
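
For anyone reproducing this: with 4 KB pages, max_pages_per_rpc=256 corresponds
to a 1 MB RPC and max_pages_per_rpc=1024 to a 4 MB RPC (the latter needs the
large-RPC support from the bug 16900 patch mentioned above).  As a rough
sketch, the client-side settings would look something like (the osc wildcard
is just for illustration):

    # 1 MB RPCs
    lctl set_param osc.*.max_pages_per_rpc=256

    # 4 MB RPCs
    lctl set_param osc.*.max_pages_per_rpc=1024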

You can find the two new attachments at:
https://bugzilla.lustre.org/attachment.cgi?id=32618 and
https://bugzilla.lustre.org/attachment.cgi?id=32619

Jeremy