
List:       lustre-discuss
Subject:    Re: [lustre-discuss] LFS tuning hierarchy question
From:       Patrick Farrell <pfarrell@whamcloud.com>
Date:       2019-01-25 2:59:59
Message-ID: DM6PR19MB2508C62ACB4E851227F3AD8CC59B0@DM6PR19MB2508.namprd19.prod.outlook.com

Ah, I understand.  Yes, that's correct.  You can also set the value on the MGS for that file system with lctl set_param -P mdc.*.max_rpcs_in_flight=32; that will then apply on the clients.
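For example, run on the MGS node (just a sketch -- the client-side mdc devices are normally named <fsname>-MDTxxxx-mdc-<uuid>, so you can narrow the pattern if you only want this for one file system; "bar" below is a placeholder for that file system's fsname):

    lctl set_param -P mdc.bar-*.max_rpcs_in_flight=32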

How are you checking the value on the server?  There should be no MDC there.  If it is instead associated with the MDT, then that is, I believe, a maximum and not a default.

________________________________
From: Ms. Megan Larko <dobsonunit@gmail.com>
Sent: Thursday, January 24, 2019 8:24:31 PM
To: Lustre User Discussion Mailing List; Patrick Farrell
Subject: [lustre-discuss] LFS tuning hierarchy question

Thank you for the information, Patrick.

On my current Lustre client, all mounted Lustre file systems (called /mnt/foo and /mnt/bar in my example) show a connection value of max_rpcs_in_flight = 8 for both file systems -- both for /mnt/foo, whose server has max_rpcs_in_flight = 8, and for /mnt/bar, whose Lustre server indicates max_rpcs_in_flight = 32.

So, using the Lustre 2.7.2 client default behavior, all of the Lustre mounts viewed on the client show max_rpcs_in_flight = 8.
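(For reference, I am reading those values on the client with something like:

    lctl get_param mdc.*.max_rpcs_in_flight

which, if I am not mistaken, is the same information as /proc/fs/lustre/mdc/*/max_rpcs_in_flight on this 2.7.2 client.)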

I am assuming that I will need to set max_rpcs_in_flight to 32 on the client, and that the client will then pick up 32 where the Lustre file system server allows it and 8 on those file systems where the servers have not increased the default value for that parameter.
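(On the client I assume that would be something along the lines of

    lctl set_param mdc.bar-*.max_rpcs_in_flight=32

with "bar" standing in for the actual fsname of the file system mounted at /mnt/bar; as far as I know a plain set_param like this does not persist across a remount.)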

Is this correct?

Cheers,
megan


_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


