
List:       gpfsug-discuss
Subject:    Re: [gpfsug-discuss] IBM Flashsystem 7300 HDD sequential write performance issue
From:       Alec <anacreo@gmail.com>
Date:       2024-01-23 22:32:30
Message-ID: CAGhSTwjNJd=qw2-dNLOLFmqTArp=ngbYA8jmF1D8nqzx7hx2LQ@mail.gmail.com

I would want to understand what your test was and how you determined the
single-drive performance. If you're just taking your aggregate throughput
and dividing by the number of drives, you're probably missing the most
restrictive part of the chain entirely.

You cannot pour water through a funnel into tablespoons below it and then
complain about the tablespoon performance.
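
To make that concrete, here is a rough back-of-the-envelope in Python (the
numbers are illustrative, not measured): if the servers together push about
3.2 GB/s and you simply divide by 80 drives, you get roughly 40 MB/s "per
drive", which looks a lot more like a saturated link or controller than
like a drive limit.

# Illustrative only: apparent per-drive rate from dividing aggregate
# throughput by drive count (assumed numbers, not measurements).
aggregate_mb_s = 3200            # assumed aggregate sequential write rate
drives = 80                      # HDDs behind the DRAID
per_drive = aggregate_mb_s / drives
print(per_drive)                 # ~40 MB/s "per drive"; this says nothing
                                 # about what a single drive can actually do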

Map out the actual bandwidth all the way through your chain and every choke
point along the way, and then make sure each point isn't constrained.

Starting from the test mechanism itself.

You can really rule out some things easily.

Go from a single thread to multiple threads to rule out CPU bottlenecks.
Take a path out of the mix to see whether the underlying connection is the
constraint, and try a narrower or a wider RAID config to see if your
performance changes.
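
A minimal sketch of that kind of scaling test in Python (the path, sizes and
thread counts are placeholders; fio or a similar benchmark tool is the more
usual way to do this):

# Compare single- vs multi-threaded sequential writes to see whether one
# writer thread is the bottleneck. All parameters are placeholders.
import os
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "/gpfs/testdir"       # hypothetical directory on the GPFS filesystem
FILE_SIZE = 4 * 1024**3      # 4 GiB per writer
BLOCK = 8 * 1024**2          # 8 MiB writes
buf = os.urandom(BLOCK)      # random data so compression can't flatter the result

def writer(i):
    path = os.path.join(PATH, f"seqwrite.{i}")
    with open(path, "wb") as f:
        written = 0
        while written < FILE_SIZE:
            f.write(buf)
            written += BLOCK
        f.flush()
        os.fsync(f.fileno())     # count only data actually on stable storage
    return written

for threads in (1, 2, 4, 8):
    start = time.time()
    with ThreadPoolExecutor(max_workers=threads) as ex:
        total = sum(ex.map(writer, range(threads)))
    rate = total / (time.time() - start) / 1024**2
    print(f"{threads} writer(s): {rate:.0f} MB/s aggregate")

If the aggregate climbs as you add writers, a single stream (or the CPU
behind it) was part of the limit; if it stays flat, the constraint is
further down the chain.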

Some of these changes will have no impact on your top throughput, and that
helps you eliminate the variables.

Also, are you saying that 32G is your aggregate throughput across multiple
FC links? That's only about 4 GB/s.
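
Rough link math (illustrative; real usable throughput sits somewhat below
line rate because of encoding and protocol overhead):

# "32G" Fibre Channel is roughly 4 GB/s of theoretical bandwidth per link.
link_gbit = 32                   # nominal 32G FC link speed
links = 3                        # assumed: e.g. one link per GPFS server
per_link_gb_s = link_gbit / 8    # ~4 GB/s theoretical per link
print(per_link_gb_s)             # 4.0
print(per_link_gb_s * links)     # 12.0 GB/s across all links, before overhead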

Check the Fibre Channel hardware and make sure you have divided your work
evenly across port groups, with clear paths to the storage through each port
group; or keep all of the workload in one port group and make sure you're
not exceeding that port group's speed.

Alec




On Tue, Jan 23, 2024, 6:06 AM Petr Plodík <petr.plodik@mcomputers.cz> wrote:

> Hi,
>
> we have a GPFS cluster with two IBM FlashSystem 7300 systems, each with HD
> expansion and 80x 12TB HDDs (in DRAID 8+P+Q), and 3 GPFS servers connected
> via 32G FC. We are doing performance tuning of sequential writes to the
> HDDs and seeing suboptimal performance. After several tests, it turns out
> that the bottleneck seems to be the single-HDD write performance, which is
> below 40 MB/s, whereas one would expect at least 100 MB/s.
>
> Does anyone have experience with IBM FlashSystem sequential write
> performance tuning, or have these arrays in their infrastructure? We would
> really appreciate any help/explanation.
>
> Thank you!
>
> Petr Plodik
> M Computers s.r.o.
> petr.plodik@mcomputers.cz
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>
