
List:       zfs-discuss
Subject:    Re: [zfs-discuss] Periodic flush
From:       Robert Milkowski <milek@task.gda.pl>
Date:       2008-06-30 23:01:03
Message-ID: 1703500900.20080701000103@task.gda.pl

Hello Roch,

Saturday, June 28, 2008, 11:25:17 AM, you wrote:


RB> I suspect,  a single dd is cpu bound.

I don't think so.

See below, again with a stripe of 48 disks: a single dd with a 1024k
block size writing 64GB.
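
(For reference, the pool is just a plain stripe of the 48 disks,
created more or less like this -- the device names below are
placeholders, not the actual ones:)

zpool create test c1t0d0 c1t1d0 c1t2d0 ... c6t7d0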

bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test         333K  21.7T      1      1   147K   147K
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0      0      0      0
test         333K  21.7T      0  1.60K      0   204M
test         333K  21.7T      0  20.5K      0  2.55G
test        4.00G  21.7T      0  9.19K      0  1.13G
test        4.00G  21.7T      0      0      0      0
test        4.00G  21.7T      0  1.78K      0   228M
test        4.00G  21.7T      0  12.5K      0  1.55G
test        7.99G  21.7T      0  16.2K      0  2.01G
test        7.99G  21.7T      0      0      0      0
test        7.99G  21.7T      0  13.4K      0  1.68G
test        12.0G  21.7T      0  4.31K      0   530M
test        12.0G  21.7T      0      0      0      0
test        12.0G  21.7T      0  6.91K      0   882M
test        12.0G  21.7T      0  21.8K      0  2.72G
test        16.0G  21.7T      0    839      0  88.4M
test        16.0G  21.7T      0      0      0      0
test        16.0G  21.7T      0  4.42K      0   565M
test        16.0G  21.7T      0  18.5K      0  2.31G
test        20.0G  21.7T      0  8.87K      0  1.10G
test        20.0G  21.7T      0      0      0      0
test        20.0G  21.7T      0  12.2K      0  1.52G
test        24.0G  21.7T      0  9.28K      0  1.14G
test        24.0G  21.7T      0      0      0      0
test        24.0G  21.7T      0      0      0      0
test        24.0G  21.7T      0      0      0      0
test        24.0G  21.7T      0  14.5K      0  1.81G
test        28.0G  21.7T      0  10.1K  63.6K  1.25G
test        28.0G  21.7T      0      0      0      0
test        28.0G  21.7T      0  10.7K      0  1.34G
test        32.0G  21.7T      0  13.6K  63.2K  1.69G
test        32.0G  21.7T      0      0      0      0
test        32.0G  21.7T      0      0      0      0
test        32.0G  21.7T      0  11.1K      0  1.39G
test        36.0G  21.7T      0  19.9K      0  2.48G
test        36.0G  21.7T      0      0      0      0
test        36.0G  21.7T      0      0      0      0
test        36.0G  21.7T      0  17.7K      0  2.21G
test        40.0G  21.7T      0  5.42K  63.1K   680M
test        40.0G  21.7T      0      0      0      0
test        40.0G  21.7T      0  6.62K      0   844M
test        44.0G  21.7T      1  19.8K   125K  2.46G
test        44.0G  21.7T      0      0      0      0
test        44.0G  21.7T      0      0      0      0
test        44.0G  21.7T      0  18.0K      0  2.24G
test        47.9G  21.7T      1  13.2K   127K  1.63G
test        47.9G  21.7T      0      0      0      0
test        47.9G  21.7T      0      0      0      0
test        47.9G  21.7T      0  15.6K      0  1.94G
test        47.9G  21.7T      1  16.1K   126K  1.99G
test        51.9G  21.7T      0      0      0      0
test        51.9G  21.7T      0      0      0      0
test        51.9G  21.7T      0  14.2K      0  1.77G
test        55.9G  21.7T      0  14.0K  63.2K  1.73G
test        55.9G  21.7T      0      0      0      0
test        55.9G  21.7T      0      0      0      0
test        55.9G  21.7T      0  16.3K      0  2.04G
test        59.9G  21.7T      0  14.5K  63.2K  1.80G
test        59.9G  21.7T      0      0      0      0
test        59.9G  21.7T      0      0      0      0
test        59.9G  21.7T      0  17.7K      0  2.21G
test        63.9G  21.7T      0  4.84K  62.6K   603M
test        63.9G  21.7T      0      0      0      0
test        63.9G  21.7T      0      0      0      0
test        63.9G  21.7T      0      0      0      0
test        63.9G  21.7T      0      0      0      0
test        63.9G  21.7T      0      0      0      0
test        63.9G  21.7T      0      0      0      0
test        63.9G  21.7T      0      0      0      0
^C
bash-3.2#

bash-3.2# ptime dd if=/dev/zero of=/test/q1 bs=1024k count=65536
65536+0 records in
65536+0 records out

real     1:06.312
user        0.074
sys        54.060
bash-3.2#

Doesn't look like it's CPU bound -- sys time is about 54s against 66s
of real time, so dd spends a noticeable part of the run off-CPU.



Let's try to read the file back after exporting and re-importing the pool.
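
That is, before starting the read the pool is bounced so the file is
no longer cached:

bash-3.2# zpool export test
bash-3.2# zpool import test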

bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test        64.0G  21.7T     15     46  1.22M  1.76M
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T  6.64K      0   849M      0
test        64.0G  21.7T  10.2K      0  1.27G      0
test        64.0G  21.7T  10.7K      0  1.33G      0
test        64.0G  21.7T  9.91K      0  1.24G      0
test        64.0G  21.7T  10.1K      0  1.27G      0
test        64.0G  21.7T  10.7K      0  1.33G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  10.2K      0  1.27G      0
test        64.0G  21.7T  10.3K      0  1.29G      0
test        64.0G  21.7T  10.5K      0  1.31G      0
test        64.0G  21.7T  9.16K      0  1.14G      0
test        64.0G  21.7T  1.98K      0   253M      0
test        64.0G  21.7T  2.48K      0   317M      0
test        64.0G  21.7T  1.98K      0   253M      0
test        64.0G  21.7T  1.98K      0   254M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  1.73K      0   221M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  1.49K      0   191M      0
test        64.0G  21.7T  2.47K      0   317M      0
test        64.0G  21.7T  1.46K      0   186M      0
test        64.0G  21.7T  2.01K      0   258M      0
test        64.0G  21.7T  1.98K      0   254M      0
test        64.0G  21.7T  1.97K      0   253M      0
test        64.0G  21.7T  2.23K      0   286M      0
test        64.0G  21.7T  1.98K      0   254M      0
test        64.0G  21.7T  1.73K      0   221M      0
test        64.0G  21.7T  1.98K      0   254M      0
test        64.0G  21.7T  2.42K      0   310M      0
test        64.0G  21.7T  1.78K      0   228M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  1.67K      0   214M      0
test        64.0G  21.7T  1.80K      0   230M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  2.47K      0   317M      0
test        64.0G  21.7T  1.73K      0   221M      0
test        64.0G  21.7T  1.99K      0   254M      0
test        64.0G  21.7T  1.24K      0   159M      0
test        64.0G  21.7T  2.47K      0   316M      0
test        64.0G  21.7T  2.47K      0   317M      0
test        64.0G  21.7T  1.99K      0   254M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  1.73K      0   221M      0
test        64.0G  21.7T  2.48K      0   317M      0
test        64.0G  21.7T  2.48K      0   317M      0
test        64.0G  21.7T  1.49K      0   190M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  1.81K      0   232M      0
test        64.0G  21.7T  1.90K      0   243M      0
test        64.0G  21.7T  2.48K      0   317M      0
test        64.0G  21.7T  1.49K      0   191M      0
test        64.0G  21.7T  2.47K      0   317M      0
test        64.0G  21.7T  1.99K      0   254M      0
test        64.0G  21.7T  1.97K      0   253M      0
test        64.0G  21.7T  1.49K      0   190M      0
test        64.0G  21.7T  2.23K      0   286M      0
test        64.0G  21.7T  1.82K      0   232M      0
test        64.0G  21.7T  2.15K      0   275M      0
test        64.0G  21.7T  2.22K      0   285M      0
test        64.0G  21.7T  1.73K      0   222M      0
test        64.0G  21.7T  2.23K      0   286M      0
test        64.0G  21.7T  1.90K      0   244M      0
test        64.0G  21.7T  1.81K      0   231M      0
test        64.0G  21.7T  2.23K      0   285M      0
test        64.0G  21.7T  1.97K      0   252M      0
test        64.0G  21.7T  2.00K      0   255M      0
test        64.0G  21.7T  8.42K      0  1.05G      0
test        64.0G  21.7T  10.3K      0  1.29G      0
test        64.0G  21.7T  10.2K      0  1.28G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.2K      0  1.27G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.6K      0  1.32G      0
test        64.0G  21.7T  10.5K      0  1.31G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  9.23K      0  1.15G      0
test        64.0G  21.7T  10.5K      0  1.31G      0
test        64.0G  21.7T  10.0K      0  1.25G      0
test        64.0G  21.7T  9.55K      0  1.19G      0
test        64.0G  21.7T  10.2K      0  1.27G      0
test        64.0G  21.7T  10.0K      0  1.25G      0
test        64.0G  21.7T  9.91K      0  1.24G      0
test        64.0G  21.7T  10.6K      0  1.32G      0
test        64.0G  21.7T  9.24K      0  1.15G      0
test        64.0G  21.7T  10.1K      0  1.26G      0
test        64.0G  21.7T  10.3K      0  1.29G      0
test        64.0G  21.7T  10.3K      0  1.29G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  8.54K      0  1.07G      0
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T      0      0      0      0
^C

bash-3.2# ptime dd if=/test/q1 of=/dev/null bs=1024k
65536+0 records in
65536+0 records out

real     1:36.732
user        0.046
sys        48.069
bash-3.2#


Well, that drop in throughput lasting several dozen seconds was
interesting... Let's run it again, this time without the export/import:

bash-3.2# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
test        64.0G  21.7T  3.00K      6   384M   271K
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T  2.58K      0   330M      0
test        64.0G  21.7T  6.02K      0   771M      0
test        64.0G  21.7T  8.37K      0  1.05G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  9.64K      0  1.20G      0
test        64.0G  21.7T  10.5K      0  1.31G      0
test        64.0G  21.7T  10.6K      0  1.32G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  9.65K      0  1.21G      0
test        64.0G  21.7T  9.84K      0  1.23G      0
test        64.0G  21.7T  9.22K      0  1.15G      0
test        64.0G  21.7T  10.9K      0  1.36G      0
test        64.0G  21.7T  10.9K      0  1.36G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.7K      0  1.34G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  10.9K      0  1.36G      0
test        64.0G  21.7T  10.6K      0  1.32G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.7K      0  1.34G      0
test        64.0G  21.7T  10.5K      0  1.32G      0
test        64.0G  21.7T  10.6K      0  1.32G      0
test        64.0G  21.7T  10.8K      0  1.34G      0
test        64.0G  21.7T  10.4K      0  1.29G      0
test        64.0G  21.7T  10.5K      0  1.31G      0
test        64.0G  21.7T  9.15K      0  1.14G      0
test        64.0G  21.7T  10.8K      0  1.35G      0
test        64.0G  21.7T  9.76K      0  1.22G      0
test        64.0G  21.7T  8.67K      0  1.08G      0
test        64.0G  21.7T  10.8K      0  1.36G      0
test        64.0G  21.7T  10.9K      0  1.36G      0
test        64.0G  21.7T  10.3K      0  1.28G      0
test        64.0G  21.7T  9.76K      0  1.22G      0
test        64.0G  21.7T  10.5K      0  1.31G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  9.23K      0  1.15G      0
test        64.0G  21.7T  9.63K      0  1.20G      0
test        64.0G  21.7T  9.79K      0  1.22G      0
test        64.0G  21.7T  10.2K      0  1.28G      0
test        64.0G  21.7T  10.4K      0  1.30G      0
test        64.0G  21.7T  10.3K      0  1.29G      0
test        64.0G  21.7T  10.2K      0  1.28G      0
test        64.0G  21.7T  10.6K      0  1.33G      0
test        64.0G  21.7T  10.8K      0  1.35G      0
test        64.0G  21.7T  10.5K      0  1.32G      0
test        64.0G  21.7T  11.0K      0  1.37G      0
test        64.0G  21.7T  10.2K      0  1.27G      0
test        64.0G  21.7T  9.69K      0  1.21G      0
test        64.0G  21.7T  6.07K      0   777M      0
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T      0      0      0      0
test        64.0G  21.7T      0      0      0      0
^C
bash-3.2#

bash-3.2# ptime dd if=/test/q1 of=/dev/null bs=1024k
65536+0 records in
65536+0 records out

real       50.521
user        0.043
sys        48.971
bash-3.2#

Now it looks like reading from the pool with a single dd actually is
CPU bound -- sys time (~49s) accounts for nearly all of the ~50.5s of
real time.

Reading the same file again and again produces more or less consistent
timing. However, every time I export/import the pool there is that drop
in throughput during the first read, and the total time increases to
almost 100 seconds... some meta-data? (Of course there are no errors of
any sort, etc.)
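
(If it is metadata being re-read after the import, the ARC stats
during that first read should show it; something along these lines,
using the standard arcstats kstat:)

kstat -p zfs:0:arcstats | grep demand_metadata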






>> Reducing zfs_txg_synctime to 1 helps a little bit but it's still not
>> an even stream of data.
>>
>> If I start 3 dd streams at the same time then it is slightly better
>> (zfs_txg_synctime set back to 5) but still very jumpy.
>>

RB> Try zfs_txg_synctime to 10; that reduces the txg overhead.


Doesn't help...
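
(For reference, the tunable can be changed on a live system with mdb,
roughly like this -- the value is in seconds:)

# check the current value
echo zfs_txg_synctime/D | mdb -k
# set it to 10 on the running kernel
echo zfs_txg_synctime/W0t10 | mdb -kw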

[...]
test        13.6G  21.7T      0      0      0      0
test        13.6G  21.7T      0  8.46K      0  1.05G
test        17.6G  21.7T      0  19.3K      0  2.40G
test        17.6G  21.7T      0      0      0      0
test        17.6G  21.7T      0      0      0      0
test        17.6G  21.7T      0  8.04K      0  1022M
test        17.6G  21.7T      0  20.2K      0  2.51G
test        21.6G  21.7T      0     76      0   249K
test        21.6G  21.7T      0      0      0      0
test        21.6G  21.7T      0      0      0      0
test        21.6G  21.7T      0  10.1K      0  1.25G
test        25.6G  21.7T      0  18.6K      0  2.31G
test        25.6G  21.7T      0      0      0      0
test        25.6G  21.7T      0      0      0      0
test        25.6G  21.7T      0  6.34K      0   810M
test        25.6G  21.7T      0  19.9K      0  2.48G
test        29.6G  21.7T      0     88  63.2K   354K
[...]

bash-3.2# ptime dd if=/dev/zero of=/test/q1 bs=1024k count=65536
65536+0 records in
65536+0 records out

real     1:10.074
user        0.074
sys        52.250
bash-3.2#


Increasing it even further (up to 32s) doesn't help either.

However lowering it to 1s gives:

[...]
test        2.43G  21.7T      0  8.62K      0  1.07G
test        4.46G  21.7T      0  7.23K      0   912M
test        4.46G  21.7T      0    624      0  77.9M
test        6.66G  21.7T      0  10.7K      0  1.33G
test        6.66G  21.7T      0  6.66K      0   850M
test        8.86G  21.7T      0  10.6K      0  1.31G
test        8.86G  21.7T      0  1.96K      0   251M
test        11.2G  21.7T      0  16.5K      0  2.04G
test        11.2G  21.7T      0      0      0      0
test        11.2G  21.7T      0  18.6K      0  2.31G
test        13.5G  21.7T      0     11      0  11.9K
test        13.5G  21.7T      0  2.60K      0   332M
test        13.5G  21.7T      0  19.1K      0  2.37G
test        16.3G  21.7T      0     11      0  11.9K
test        16.3G  21.7T      0  9.61K      0  1.20G
test        18.4G  21.7T      0  7.41K      0   936M
test        18.4G  21.7T      0  11.6K      0  1.45G
test        20.3G  21.7T      0  3.26K      0   407M
test        20.3G  21.7T      0  7.66K      0   977M
test        22.5G  21.7T      0  7.62K      0   963M
test        22.5G  21.7T      0  6.86K      0   875M
test        24.5G  21.7T      0  8.41K      0  1.04G
test        24.5G  21.7T      0  10.4K      0  1.30G
test        26.5G  21.7T      1  2.19K   127K   270M
test        26.5G  21.7T      0      0      0      0
test        26.5G  21.7T      0  4.56K      0   584M
test        28.5G  21.7T      0  11.5K      0  1.42G
[...]

bash-3.2# ptime dd if=/dev/zero of=/test/q1 bs=1024k count=65536
65536+0 records in
65536+0 records out

real     1:09.541
user        0.072
sys        53.421
bash-3.2#



Looks slightly less jumpy, but the total real time is about the same,
so the average throughput is also the same -- 64GB in roughly 66-70
seconds, i.e. about 1GB/s.




>> Reading with one dd produces steady throughput but I'm disappointed
>> with the actual performance:
>>

RB> Again, probably cpu bound. What's "ptime dd..." saying ?

You were right here. Reading with a single dd seems to be CPU bound.
However, multiple read streams do not seem to increase performance
considerably.
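
(The multi-stream read test was along these lines -- a few dd readers
started in parallel; the stream count and file name here are just an
illustration:)

for i in 1 2 3; do
    dd if=/test/q1 of=/dev/null bs=1024k &
done
wait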

Nevertheless, the main issue is the jumpy writing...




-- 
Best regards,
 Robert Milkowski                            mailto:milek@task.gda.pl
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss