
List:       linux-lvm
Subject:    Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
From:       "Alexander 'Leo' Bergolth" <leo () strike ! wu ! ac ! at>
Date:       2016-11-26 23:21:56
Message-ID: a0e079fb-d45f-bfbb-a51f-70824ac45710 () strike ! wu ! ac ! at

On 2016-11-18 12:08, Alexander 'Leo' Bergolth wrote:
> I did my tests with two 5k-RPM SATA disks connected to a single USB 3.0
> port using a JMS562 USB 3.0 to SATA bridge in JBOD mode. According to
> lsusb -t, the uas module is in use and looking at
> /sys/block/sdX/queue/nr_requests, command queuing seems to be active.
> 
> I've discussed my problems with Heinz Mauelshagen yesterday, who was
> able to reproduce the issue using two SATA disks, connected to two USB
> 3.0 ports that share the same USB bus. However, he didn't notice any
> speed penalties if the same disks are connected to different USB buses.
> 
> So it looks like the problem is USB related...

I did some tests similar to Heinz Mauelshagen's setup and connected my disks to two different USB 3.0 buses. Unfortunately I cannot confirm that some kind of USB congestion is the problem. I am getting the same results as when using just one USB bus: smaller regionsizes dramatically slow down sequential write speed.

The reason why Heinz got different results was the different dd blocksize in our tests: I used bs=1M oflag=direct and Heinz used bs=1G oflag=direct. The larger blocksize leads to far fewer bitmap updates (>1000 vs. 60 for 1 GB of data).

I'd expect each of those bitmap updates to cause two seeks. This random IO is, of course, very expensive, especially if slow 5000 RPM disks are used...
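As a back-of-envelope sketch (my own assumed numbers, not measurements): if each bitmap update costs two seeks and a seek on a slow SATA disk takes on the order of 12 ms, the overhead adds up quickly:

```shell
awk 'BEGIN {
  updates = 1000        # bitmap updates with bs=1M (>1000 per GB, see above)
  seeks_per_update = 2  # assumed: seek to the bitmap, seek back to the data
  seek_ms = 12          # assumed average seek time on a slow SATA disk
  printf "extra seek time: ~%.1f s\n", updates * seeks_per_update * seek_ms / 1000
}'
# -> extra seek time: ~24.0 s
```

That is the same order of magnitude as the difference between the bs=1M and bs=1G runs (55,7 s vs. 7,3 s for 1 GB).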

I've recorded some tests with blktrace. The results can be downloaded from http://leo.kloburg.at/tmp/lvm-raid1-bitmap/


# lsusb -t
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    |__ Port 4: Dev 3, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 1: Dev 9, If 0, Class=Mass Storage, Driver=uas, 5000M
        |__ Port 2: Dev 8, If 0, Class=Mass Storage, Driver=uas, 5000M

# readlink -f /sys/class/block/sd[bc]/device/
/sys/devices/pci0000:00/0000:00:14.0/usb2/2-4/2-4.2/2-4.2:1.0/host2/target2:0:0/2:0:0:0
/sys/devices/pci0000:00/0000:00:14.0/usb2/2-4/2-4.1/2-4.1:1.0/host3/target3:0:0/3:0:0:0


# echo noop > /sys/block/sdb/queue/scheduler
# echo noop > /sys/block/sdc/queue/scheduler
# pvcreate /dev/sdb3 
# pvcreate /dev/sdc3 
# vgcreate vg_t /dev/sd[bc]3

# lvcreate --type raid1 -m 1 -L30G --regionsize=512k --nosync -y -n lv_t vg_t
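For scale, a quick sketch (my own arithmetic, not LVM output) of what --regionsize=512k means for this 30G LV, assuming the write-intent bitmap tracks one bit per region:

```shell
awk 'BEGIN {
  lv_kib = 30 * 1024 * 1024  # 30 GiB LV in KiB
  region_kib = 512           # --regionsize=512k
  regions = lv_kib / region_kib
  printf "%d regions, ~%d byte bitmap\n", regions, regions / 8
}'
# -> 61440 regions, ~7680 byte bitmap
```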


# ---------- regionsize 512k, dd bs=1M oflag=direct
# blktrace -d /dev/sdb3 -d /dev/sdc3 -d /dev/vg_t/lv_t -D raid1-512k-reg-direct-bs-1M/
# dd if=/dev/zero of=/dev/vg_t/lv_t bs=1M count=1000 oflag=direct
1048576000 bytes (1,0 GB) copied, 55,7425 s, 18,8 MB/s

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb3              0,00     0,00    0,00   54,00     0,00 18504,00   685,33     0,14    2,52    0,00    2,52   1,70   9,20
sdc3              0,00     0,00    0,00   54,00     0,00 18504,00   685,33     0,14    2,52    0,00    2,52   1,67   9,00
dm-9              0,00     0,00    0,00   18,00     0,00 18432,00  2048,00     1,00   54,06    0,00   54,06  55,39  99,70

# ---------- regionsize 512k, dd bs=1G oflag=direct
# blktrace -d /dev/sdb3 -d /dev/sdc3 -d /dev/vg_t/lv_t -D raid1-512k-reg-direct-bs-1G/
# dd if=/dev/zero of=/dev/vg_t/lv_t bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1,1 GB) copied, 7,3139 s, 147 MB/s

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb3              0,00     0,00    0,00  306,00     0,00 156672,00  1024,00   135,47  441,34    0,00  441,34   3,27 100,00
sdc3              0,00     0,00    0,00  302,00     0,00 154624,00  1024,00   129,46  421,76    0,00  421,76   3,31 100,00
dm-9              0,00     0,00    0,00    0,00     0,00     0,00     0,00   648,81    0,00    0,00    0,00   0,00 100,00


# ---------- regionsize 512k, dd bs=1M conv=fsync
# blktrace -d /dev/sdb3 -d /dev/sdc3 -d /dev/vg_t/lv_t -D raid1-512k-reg-fsync-bs-1M/
# dd if=/dev/zero of=/dev/vg_t/lv_t bs=1M count=1000 conv=fsync
1048576000 bytes (1,0 GB) copied, 7,75605 s, 135 MB/s

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb3              0,00 21971,00    0,00  285,00     0,00 145920,00  1024,00   141,99  540,75    0,00  540,75   3,51 100,00
sdc3              0,00 21971,00    0,00  310,00     0,00 158720,00  1024,00   106,86  429,35    0,00  429,35   3,23 100,00
dm-9              0,00     0,00    0,00    0,00     0,00     0,00     0,00 24561,60    0,00    0,00    0,00   0,00 100,00


Cheers,
--leo
-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at   
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


