
List:       freebsd-fs
Subject:    Re: ZFS: Parallel I/O to vdevs that appear to be separate physical disks
From:       Kevin Day <toasty@dragondata.com>
Date:       2010-10-23 19:06:23
Message-ID: 86C8DC50-9DE0-42B3-8A57-63AB4D095E6D@dragondata.com


On Oct 22, 2010, at 5:52 PM, Eugene M. Kim wrote:

> 
> Greetings,
> 
> I run a FreeBSD guest in VMware ESXi with a 10GB zpool.  Lately the
> originally provisioned 10GB has proved insufficient, and I would like
> to provision another 10GB virtual disk and add it to the zpool as a
> top-level vdev.
> 
> The original and new virtual disks come from the same physical pool (a
> RAID-5 array), but appear to be separate physical disks to the
> FreeBSD guest (da0 and da1).  I am afraid that ZFS would schedule
> I/O to the two virtual disks in parallel, expecting pool performance
> to improve, while performance would actually suffer due to seeking
> back and forth between two regions of one physical pool.
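
For reference, the operation described above would look something like this on the guest, assuming a pool named "tank" (illustrative; check zpool status for the real name):

    # confirm the current pool layout and that the guest sees the new disk
    zpool status tank
    camcontrol devlist

    # dry run first: -n prints the resulting layout without changing anything
    zpool add -n tank da1

    # add da1 as a new top-level vdev; ZFS then stripes writes across da0 and da1
    zpool add tank da1

The dry run is worth doing: on ZFS of that era a top-level vdev, once added, cannot be removed from the pool.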


Just to chime in with a bit of practical experience here... It really won't
measurably matter for most workloads. FreeBSD will schedule I/O separately, but
VMware will recombine the requests on the hypervisor and reschedule them as best
it can, knowing the real hardware layout.

This scheme doesn't work well if you're running two mirrored disks, where your
OS might try round-robining the requests between what it thinks are two identical
drives that can seek independently. But if you've only got one place you can
possibly look for the data, you have no choice about where to read it from. So
the OS issues a bunch of requests as needed, and VMware reorders them as best
it can.

On occasion, where VMware is connected to a very large SAN or to local storage
with many (48+) drives, we've even seen small performance increases from giving
FreeBSD several small disks and using vinum or ccd to stripe across them. If
FreeBSD thinks there are a half dozen drives, it will sometimes allow more
outstanding I/O requests at a time, and VMware can reschedule them at its whim.
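
A minimal sketch of that layout using gstripe, the GEOM successor to the vinum/ccd striping Kevin mentions (the device names and the 128KB stripe size are illustrative):

    # load the GEOM stripe class and build one striped provider from four small disks
    kldload geom_stripe
    gstripe label -v -s 131072 st0 /dev/da1 /dev/da2 /dev/da3 /dev/da4

    # the striped device appears as /dev/stripe/st0; put a filesystem on it
    newfs -U /dev/stripe/st0
    mount /dev/stripe/st0 /data

With ZFS itself no extra layer is needed: listing several disks at pool creation (zpool create tank da1 da2 da3 da4) gives the same effect, since ZFS stripes across top-level vdevs automatically.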

-- Kevin

_______________________________________________
freebsd-fs@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


