
List:       zfs-discuss
Subject:    Re: [zfs-discuss] Troubleshooting dedup performance
From:       Brent Jones <brent@servuhome.net>
Date:       2009-12-29 3:25:18
Message-ID: ee9f3b480912281925gd0b664dm1a8eb96a6a0e973a@mail.gmail.com

On Mon, Dec 28, 2009 at 5:46 PM, James Dickens <jamesd.wi@gmail.com> wrote:
>
> Here is what I see before zfs_prefetch_disable is set. I'm currently moving
> (mv /tank/games /tank/fs1 /tank/fs2) 0.5 GB and larger files from a deduped
> pool to another; the file copy seems fine, but deletes kill performance.
> This is b130 OSOL /dev.
>
>  new  name   name  attr  attr lookup rddir  read read  write write
>  file remov  chng   get   set    ops   ops   ops bytes   ops bytes
>     0     0     0     6     0      7     0     2    88     1   116 zfs
>     0     0     0     2     0      4     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     6     0     14     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      1     0     0     0     4 24.0M zfs
>     0     0     0     0     0      0     0     0     0     3 16.0M zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0     18     0     0     0     1   116 zfs
>  new  name   name  attr  attr lookup rddir  read read  write write
>  file remov  chng   get   set    ops   ops   ops bytes   ops bytes
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   260 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     4 24.0M zfs
>     0     0     0     2     0      4     0     0     0     4 24.0M zfs
>     0     0     0     0     0      2     0     0     0     1   116 zfs
>
> With zfs_prefetch_disable set I see a more consistent result, but it is
> probably not any faster.
>  new  name   name  attr  attr lookup rddir  read read  write write
>  file remov  chng   get   set    ops   ops   ops bytes   ops bytes
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>     0     0     0     0     0      0     0     0     0     1   260 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>     0     0     0     6     0      7     0     2    88     2 8.00M zfs
>     0     0     0     2     0      4     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>     0     0     0     0     0      1     0     0     0     1   116 zfs
>     0     0     0     0     0      3     0     0     0     2 8.00M zfs
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>  new  name   name  attr  attr lookup rddir  read read  write write
>  file remov  chng   get   set    ops   ops   ops bytes   ops bytes
>     0     0     0     0     0      0     0     0     0     1   116 zfs
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>     0     0     0     0     0      0     0     0     0     2 8.00M zfs
>     1     0     0     5     0      5     0     2  9.9K     1   116 zfs
>     0     0     0     0     0      3     0     0     0     1   116 zfs
>     0     0     0     4     0      7     2     0     0     2 8.00M zfs
>
>
> James Dickens
>
> On Thu, Dec 24, 2009 at 11:22 PM, Michael Herf <mbherf@gmail.com> wrote:
>>
>> FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
>> running visibly faster (somewhere around 3-5x faster).
>>
>> echo zfs_prefetch_disable/W0t1 | mdb -kw
>>
>> Anyone else see a result like this?
>>
>> I'm using the "read" bandwidth from the sending pool from "zpool
>> iostat -x 5" to estimate transfer rate, since I assume the write rate
>> would be lower when dedup is working.
>>
>> mike
>>
>> p.s. Note to set it back to the default behavior:
>> echo zfs_prefetch_disable/W0t0 | mdb -kw
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
>
>
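Michael's trick of estimating the transfer rate from the sending pool's read
bandwidth can be scripted. A minimal sketch, with assumptions flagged: the
field layout and the canned sample lines below are illustrative stand-ins for
real `zpool iostat tank 5` output, not data from these systems:

```shell
# Average the read-bandwidth column from `zpool iostat`-style lines.
# Assumed field layout: pool alloc free ops(r) ops(w) bw(read) bw(write).
# The sample lines are canned stand-ins for `zpool iostat tank 5` output.
samples='tank  1.2T  800G  120  40  14.5M  2.1M
tank  1.2T  800G  130  38  15.2M  1.9M
tank  1.2T  800G  110  42  13.8M  2.3M'

avg=$(printf '%s\n' "$samples" | awk '{
  bw = $6
  if (bw ~ /M$/)      { sub(/M$/, "", bw); bw *= 1024 }   # MB/s -> KB/s
  else if (bw ~ /K$/) { sub(/K$/, "", bw) }               # already KB/s
  sum += bw; n++
} END { printf "%.1f", sum / n }')

echo "average read bandwidth: ${avg} KB/s"
```

Sampling the read side, as Michael notes, avoids undercounting when dedup
suppresses writes on the receiving pool.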

In my own testing, the de-dupe code does not seem mature enough for
production use (hence it still being in /dev :)
During testing, my X4540s performed so poorly that I saw as little as 50
bytes/sec of read/write throughput with de-dupe enabled.
These are beasts of systems, fully loaded, yet at times they could not even
sustain 1 KB/sec, most notably during ZFS send/recv, deletes, and
destroying filesystems and snapshots.
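Throughput collapsing on deletes and destroys is consistent with the dedup
table (DDT) outgrowing ARC, so freeing each block costs a random read. A
back-of-envelope sizing sketch; the ~320-bytes-per-entry figure and the data
sizes are rough assumptions, not measurements from this thread:

```shell
# Back-of-envelope DDT memory estimate.
# Assumptions (not from this thread): 20 TB of unique data, 128 KB average
# block size, and ~320 bytes of core memory per DDT entry.
data_bytes=$(( 20 * 1024 * 1024 * 1024 * 1024 ))   # 20 TB
blocksize=131072                                    # 128 KB
entry_bytes=320
entries=$(( data_bytes / blocksize ))
ddt_gib=$(( entries * entry_bytes / 1024 / 1024 / 1024 ))
echo "${entries} DDT entries -> ~${ddt_gib} GiB of RAM to hold the DDT"
```

If the estimate lands anywhere near (or above) installed RAM, DDT misses
during deletes and destroys would explain exactly this kind of stall.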

Even with de-dupe turned off, a filesystem that contains previously
de-duped blocks stays slow. I found I had to destroy the filesystem
entirely once de-dupe had been enabled, then re-create it, to restore the
previous level of performance.
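The rebuild described above would look roughly like this. This is an
untested sketch with placeholder names (`tank`, `tank/data`); note that
send/recv copies the data, but de-dupe must be off on the receiving side, or
the new blocks will simply be de-duped again:

```shell
# Placeholder names throughout; adjust for your own pool layout.
zfs set dedup=off tank                   # stop new writes from being deduped
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank/data.new   # rewrite the blocks
zfs destroy -r tank/data                 # frees the old deduped blocks (slow!)
zfs rename tank/data.new tank/data
```

The destroy step itself will crawl for the same reason deletes do, so plan
for it to take a while before the pool recovers.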

A bit of a letdown, so I will wait on the sidelines for this feature to mature.


-- 
Brent Jones
brent@servuhome.net

