
List:       gpfsug-discuss
Subject:    Re: [gpfsug-discuss] sequential I/O write - performance tuning
From:       Jan-Frode Myklebust <janfrode () tanso ! net>
Date:       2024-02-07 23:00:01
Message-ID: CAHwPathA=D88852-RSrs+VoNxSq67ha+fgfFOLiMr1_fiNO+KA () mail ! gmail ! com

Also, please show mmlsconfig output.

  -jf
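For reference, both the stored cluster configuration and the values actually
in effect on a node can be dumped with the standard Scale commands, for
example:

    mmlsconfig
    mmdiag --config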

On Wed, 7 Feb 2024 at 20:32, Aaron Knister <aaron.knister@gmail.com> wrote:

> What does iostat output look like when you're running the tests on GPFS?
> It would be good to confirm that GPFS is successfully submitting 2MB I/O
> requests.
>
> Sent from my iPhone
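For example, running something like the following alongside the fio test
shows the average request size reaching each block device (depending on the
sysstat version the relevant column is avgrq-sz, reported in 512-byte
sectors, so roughly 4096 for 2MiB requests, or areq-sz/wareq-sz, reported in
KiB, so roughly 2048):

    iostat -x 1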
>
> On Feb 7, 2024, at 08:08, Michal Hruška <Michal.Hruska@mcomputers.cz>
> wrote:
>
>
> Dear gpfsUserGroup,
>
>
>
> We are dealing with a new GPFS cluster (Storage Scale 5.1.9 on RHEL 9.3)
> [3 FE servers and one storage system] and some performance issues.
>
> We were able to tune the underlying storage system to ~4500 MiB/s across
> the 8 RAID groups using XFS (one XFS filesystem per RAID group) and a
> parallel fio test.
>
> Once we installed GPFS – one filesystem across all 8 RAID groups – we
> observed a performance drop to ~3300 MiB/s using the same fio test.
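The exact fio job file was not shared; a minimal sketch of the kind of
parallel sequential-write test described above, with the target directory,
file size and queue depth as illustrative assumptions, might look like:

    fio --name=seqwrite --directory=/gpfs/fs1/fiotest \
        --rw=write --bs=2M --direct=1 --ioengine=libaio \
        --iodepth=16 --numjobs=8 --size=32G --group_reporting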
>
> All tests were performed from one front-end node connected directly to
> the storage system via Fibre Channel (4 paths, each running at 32GFC).
>
>
>
> The storage system's RAID groups are sized to fit 2MB data blocks and so
> utilize full-stripe writes, as the RAID geometry is 8+2 with a 256KB
> strip size -> 8*256KB = 2MB.
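For the full-stripe math to carry through end to end, the GPFS data block
size has to match the 2MiB stripe as well; assuming the filesystem is named
fs1 (a hypothetical name), the block size and block allocation type can be
checked with:

    mmlsfs fs1 -B -j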
>
> The I/O pattern on the FC side is optimized as well.
>
> GPFS metadata was moved to NVMe SSDs on a different storage system.
>
> We have already tried some obvious tuning on the GPFS side (maxMBpS,
> scatter/cluster block allocation, different block sizes and some other
> parameters), but there was no performance gain.
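For illustration only (the values below are assumptions, not
recommendations), such parameters are normally adjusted with mmchconfig, and
some of them only take effect after the GPFS daemon is restarted:

    mmchconfig pagepool=16G,maxMBpS=20000,workerThreads=512 -i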
>
>
>
> We were advised that GPFS might not issue purely sequential writes to the
> storage system, and that the storage system is therefore handling more
> random I/O than sequential.
>
>
>
> Could you please share some thoughts on how to make GPFS I/O as sequential
> as possible? The goal is to reach at least 4000 MiB/s for sequential
> writes/reads.
>
>
>
> best regards,
>
> *Michal Hruška*
>
>
>
>

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org

