
List:       drbd-user
Subject:    Re: [DRBD-user] Performance problem on drbd8 2-node cluster
From:       Marco Marino <marino.mrc () gmail ! com>
Date:       2019-05-06 12:12:20
Message-ID: CAFHVVuKzxmKVSJWj2GJehgg6sLiRb2LbSiMdQsZfWFSdDU-PNg () mail ! gmail ! com

Thank you Gianni.
I asked on the mailing list first because if I write directly to the backing
device, I will probably need to resync the DRBD device when the cluster comes
back up.
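
A read-only run against the backing device should be safe, though. Something
like the following, assuming /dev/sdb is the backing device and the DRBD
resource is named r0 (the resource name here is just a placeholder):

fio --filename=/dev/sdb --readonly --direct=1 --rw=randread \
    --ioengine=libaio --bs=16k --iodepth=32 --numjobs=32 --runtime=60 \
    --group_reporting --name=ro-test

And if I do run a write test on the backing device, I can force a full resync
from the peer on that node afterwards:

drbdadm invalidate r0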
Anyway, I will run the tests on /dev/sdb as soon as possible.
Thank you

On Mon, 6 May 2019 at 13:20, Gianni Milo <gianni.milo22@gmail.com> wrote:

> I would run the same tests on the backing storage first, without DRBD,
> cluster management, or any other complexity involved. Then, after confirming
> that there is no bottleneck there, I would slowly move to the "upper
> layers" ...
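>
> For example, something like this on each node (just a sketch, reusing your
> fio options and assuming /dev/sdb is the backing device; note that a write
> test there dirties the DRBD data, so expect a resync afterwards):
>
> fio --filename=/dev/sdb --direct=1 --rw=randrw --refill_buffers \
>     --norandommap --randrepeat=0 --ioengine=libaio --bs=16k --rwmixread=30 \
>     --iodepth=32 --numjobs=32 --runtime=60 --group_reporting --name=rawtest
>
> If the two nodes already differ at this level, DRBD is not the culprit.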
>
>
>
> On Mon, 6 May 2019 at 09:49, Marco Marino <marino.mrc@gmail.com> wrote:
>
>> Hello, I'm using DRBD 8.4.11 on a two-node cluster on top of CentOS 7.
>> Both servers have the same hardware configuration: same CPU, RAM,
>> disks, ... More precisely, there is a MegaRAID LSI SAS 9361-8i with a RAID5
>> volume. CacheCade is enabled on both controllers, backed by a RAID0 volume
>> of 4x256GB SSD disks.
>> I'm running the same test with fio on both nodes:
>>
>> fio --filename=/dev/mapper/vg1-vol2 --direct=1 --rw=randrw
>> --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=16k
>> --rwmixread=30 --iodepth=32 --numjobs=32 --runtime=60 --group_reporting
>> --name=16k7030test
>>
>> On node 1 I have:
>>
>> Run status group 0 (all jobs):
>>    READ: bw=259MiB/s (272MB/s), 259MiB/s-259MiB/s (272MB/s-272MB/s),
>> io=15.2GiB (16.3GB), run=60021-60021msec
>>   WRITE: bw=605MiB/s (635MB/s), 605MiB/s-605MiB/s (635MB/s-635MB/s),
>> io=35.5GiB (38.1GB), run=60021-60021msec
>>
>> I'm running the test after executing this command on node 2:
>>
>> pcs cluster standby
>>
>> With node 2 in standby there are no writes over the replication network, so
>> I can measure the effective speed of the local disk.
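>>
>> To confirm that replication is really out of the picture during the test, I
>> check the connection state on node 1 (DRBD 8.4, so via /proc/drbd):
>>
>> cat /proc/drbd       # should report cs:WFConnection, i.e. no connected peer
>>
>> and afterwards, on node 2:
>>
>> pcs cluster unstandby     # bring the node back into the cluster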
>>
>> If I run the same test from node 2 (putting node 1 in standby this time),
>> performance is degraded to roughly 40% of what node 1 achieves:
>> Run status group 0 (all jobs):
>>    READ: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s),
>> io=6039MiB (6332MB), run=60068-60068msec
>>   WRITE: bw=234MiB/s (245MB/s), 234MiB/s-234MiB/s (245MB/s-245MB/s),
>> io=13.7GiB (14.7GB), run=60068-60068msec
>>
>>
>> Can someone give me advice? Why does this happen? I repeat: the
>> configuration is identical on both servers. I can check any parameter and
>> can provide more details if needed.
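>>
>> For instance, I could compare the controller and CacheCade settings on the
>> two nodes; a rough sketch, assuming storcli64 is installed and the
>> controller is enumerated as /c0:
>>
>> storcli64 /c0 show all          # controller-level settings, CacheCade status
>> storcli64 /c0/vall show all     # per-virtual-drive cache policy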
>>
>> Thank you
>>
>

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user

