
List:       vbox-dev
Subject:    [vbox-dev] disk i/o optimization info
From:       Huihong Luo <huisinro () yahoo ! com>
Date:       2009-04-09 16:12:42
Message-ID: 624382.46492.qm () web34303 ! mail ! mud ! yahoo ! com



Hi,

I'd like to understand more about how the vbox virtual disk is designed to achieve high performance. Can someone shed some light? (I know I could dig into the source code.)

These are some of my rough ideas; please correct me:

(1) Fixed lookup time. When a disk I/O arrives, it first looks up the grain directory and the grain table, and then performs the real I/O. So in the worst case, one disk I/O from the VM causes 2 (or 3) disk I/Os on the host. The lookup time is fixed and fast. For writes, one more I/O might be needed to save the grain table.

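To make the lookup in (1) concrete, here is a minimal sketch of the two-level translation, assuming the grain-directory/grain-table layout of the published VMDK sparse format (the grain size and table size below are that spec's defaults; VirtualBox's actual in-memory representation may differ):

```python
# Hedged sketch of the two-level lookup used by sparse formats such as VMDK.
# GRAIN_SECTORS and GT_ENTRIES follow the VMDK sparse-extent defaults; the
# dict/list layout here is an illustration, not VirtualBox's data structures.

GRAIN_SECTORS = 128   # one grain = 128 sectors = 64 KiB
GT_ENTRIES = 512      # grain-table entries per grain table

def guest_to_host_offset(lba, grain_directory, grain_tables):
    """Map a guest LBA to (host_file_sector, sector_offset_within_grain).

    Returns None if the grain is unallocated: a read then returns zeros,
    while a write must allocate a grain and update the table (the extra
    host I/O mentioned for writes).
    """
    grain_index = lba // GRAIN_SECTORS
    gd_index = grain_index // GT_ENTRIES   # first level: which grain table
    gt_index = grain_index % GT_ENTRIES    # second level: entry inside it
    gt = grain_tables[grain_directory[gd_index]]
    grain_sector = gt[gt_index]
    if grain_sector == 0:                  # 0 marks "not allocated"
        return None
    return grain_sector, lba % GRAIN_SECTORS
```

Both table indices are pure arithmetic, which is why the lookup cost is fixed; the variable cost is only whether the tables themselves are already in memory.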
(2) Lookup cache. I imagine some lookup entries must be cached in memory to avoid reloading them from disk. If so, how much memory is allocated for this?

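On (2), one plausible shape for such a cache is a small LRU of whole grain tables, which bounds memory while keeping hot translations resident. This is only a sketch of the general idea, not VirtualBox's actual policy, and the capacity value is invented:

```python
from collections import OrderedDict

class GrainTableCache:
    """Tiny LRU cache of grain tables keyed by grain-directory index.

    Illustration only: the capacity knob and loader callback are
    assumptions, not VirtualBox's allocation policy.
    """

    def __init__(self, capacity=32):
        self.capacity = capacity
        self._cache = OrderedDict()

    def get(self, gd_index, load_from_disk):
        if gd_index in self._cache:
            self._cache.move_to_end(gd_index)   # mark most recently used
            return self._cache[gd_index]
        table = load_from_disk(gd_index)        # cache miss: one host I/O
        self._cache[gd_index] = table
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)     # evict least recently used
        return table
```

With such a cache, repeated I/O within the same region of the virtual disk pays the table-load cost only once, so the "2 (or 3) host I/Os" worst case from (1) becomes 1 host I/O in the common case.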
(3) Disk sector cache? Is there any caching mechanism for the actual sectors fetched? Perhaps you use the file API for disk access, so the host OS may already cache the disk sectors.

(4) BIOS access to the virtual disk. At boot time the BIOS accesses the disk in emulation mode; is anything special done to accelerate disk I/O there? Is it possible that a raw disk (direct r/w to a hard disk or partition) boots more slowly than .vdi or .vmdk files, since there isn't much that can be optimized for a raw disk?

(5) "Preserving" disk scheduling algorithms. Guest disk scheduling algorithms operate on the assumption of a real hard disk; since vbox uses virtual disks, do these algorithms still work? For example, on a virtual disk, sectors that were originally contiguous may no longer be, and sectors with smaller offsets might actually lie at the end of the file, etc. Is anything special done in this area?

Thanks,

Huihong
_______________________________________________
vbox-dev mailing list
vbox-dev@virtualbox.org
http://vbox.innotek.de/mailman/listinfo/vbox-dev

