List: veritas-vx
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
From: "Hudes, Dana" <hudesd () hra ! nyc ! gov>
Date: 2010-01-07 21:25:42
Message-ID: 0CC36EED613AED418A80EE6F44A659DB0C0F0391CA () XCH2 ! windows ! nyc ! hra ! nycnet
I'm not really familiar with MvFS, but what you describe is the inverse of a zpool. With
a zpool, I have one volume with multiple filesystems. I can and do use different LUNs
for the ZFS Intent Log and for the actual storage. I can and do have one LUN which
serves the log needs of multiple pools by slicing it (since a log only needs at most half
of physical memory, it is inefficient to make each one its own LUN; rather, I take the
LUN, slice it, and give a slice to each zpool as its log device). The log portion
sounds similar to MvFS. What's different is that I can and do have a zpool with
filesystems for multiple virtual hosts even though the pool is only perhaps 9 devices
(of whatever size, though all the same size). Splitting off metadata is interesting
and I need to look into the details.
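For example, a rough sketch of that layout (device names here are placeholders, not our
actual LUNs):

  # one LUN sliced up; each pool gets a slice as its dedicated ZIL device
  zpool create pool1 c2t0d0 c2t1d0 log c3t0d0s0
  zpool create pool2 c2t2d0 c2t3d0 log c3t0d0s1
  # many filesystems share the one pool, e.g. one per virtual host
  zfs create pool1/zoneA
  zfs create pool1/zoneB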
Regarding storage speed, the latest fashion is taking an FC SAN and an FC array but
putting SATA II disk drives into special carriers which somehow connect them to the
FC as fabric devices (no more FC-AL; it's all switched, and each disk gets its very own
'loop' whether it's a native FC or SATA drive). The SSDs are built as SATA, so they
do the same trick with them.
The burgeoning promise is iSCSI + 10 Gbit Ethernet. We're getting new Juniper
switches in RSN. Of course, no one has ordered 10 Gbit HBAs for the existing equipment
even though it's a supported option (it comes with on-board 1G ports). The promise
is that iSCSI over 10 GbE is faster than FC can go (since the new FC standard, for which we
don't have switches, HBAs, or arrays, is 8 Gbit). It smells an awful lot like the
Ethernet vs. Token Ring war of the early 90s (especially since FC is a token-passing
architecture), with the difference that people do have FC switches, whereas Token Ring
switches weren't much of a market success.
________________________________
From: William Havey [mailto:bbhavey@gmail.com]
Sent: Thursday, January 07, 2010 3:05 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
VERITAS Storage Foundation has had the MvFS (multi-volume VxFS) component since
the SF 5.0 product was introduced in 2006. Send file data into one volume, metadata
into another, file system storage checkpoints into a third, and so on. The volumes can
be "thin provisioned" to automatically request growth up to a pre-configured size.
You have to give tons of credit to the hardware scientists for developing drives of
truly astonishing capacity (I've recently ordered my first laptop with solid state
drives, another remarkable achievement). But size matters for performance only
slightly. What is being offered is more storage, not faster storage (except in the
vacuum of advertising).
On Thu, Jan 7, 2010 at 1:46 PM, Hudes, Dana <hudesd@hra.nyc.gov> wrote:
Interesting point about striping and use of the entire disk. So if my LUNs are pieces of
disks rather than entire disks, I lose the benefit of striping? This is more and more
common as physical drives, even 15K RPM Fibre Channel ones, increase in size. I think
the 9990V is using 320GB drives where the 9980 had 73GB drives.
This brings us to a benefit of the 'many filesystems - one volume' approach of ZFS
when combined with virtual hosts (containers/non-global zones). Because I can
get/justify a larger pool of storage for the entire platform rather than piecemealing
LUNs on a per-vhost basis, I can get the entire physical device for each column in
the RAID group. The trouble is that with ZFS I can't peer inside the LUNs the way
you describe with VxVM. I suppose I could use VxVM + ISP and then use the resulting
volumes as my devices for ZFS. Once I have a zpool, I can parcel out logical storage
to each non-global zone.
The 1 filesystem - 1 volume VxFS+VxVM approach results in lots of wasted storage in
our shop, as admins have to leave room for growth in each filesystem instead of
letting them all pull from the one pool -- but we have to have separate filesystems
because applications demand different mountpoints.
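For example (zone and pool names are placeholders, and the syntax is from memory):

  # per-zone filesystems draw from the one shared pool; quotas replace
  # per-filesystem headroom
  zfs create -o quota=50g pool1/zoneA
  zonecfg -z zoneA 'add dataset; set name=pool1/zoneA; end'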
________________________________
From: William Havey [mailto:bbhavey@gmail.com]
Sent: Thursday, January 07, 2010 12:22 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
The array stays a high-performance machine. Few, if any, features are sacrificed. The
software expense is reduced tremendously. The functionality is provided on the host by
software that is needed anyway to link the OS to an array, i.e., a volume management
product. The hardware vendor providing the same software functionality becomes the
redundant, extra-cost feature. Total cost of ownership (cost per gigabyte) is reduced by
running software capable of I/O to any device, not just one specific device.
On the technical side, as przemol indicates in his latest post, cache is not
infinite; it will become saturated, de-staging kicks in, and performance analysis and
improvement once again includes how to make the disks work the fastest. To the items
already cited in the discussion I would add the addressing of data blocks to specific
storage locations. I think this is what striping does best. Striping can be pushed
through the LUN object to the disk object. Successive I/Os can be sent either to the
same disk (if within an address range) or to the next disk in the RAID group when the
address is outside the range. Striping across LUNs is beneficial when each device in
the RAID group comprising the LUN spans the entire physical device. The address of
an I/O then includes a LUN and a specific disk within the RAID group, i.e., striping over
stripes.
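As a simplified model of that addressing (stripe unit SU in blocks, NCOL columns):
block offset OFF lands in column (OFF / SU) mod NCOL, at offset
(OFF / (SU * NCOL)) * SU + (OFF mod SU) within that column, e.g.:

  SU=128; NCOL=2; OFF=419000
  echo $(( (OFF / SU) % NCOL ))                    # column index
  echo $(( (OFF / (SU * NCOL)) * SU + OFF % SU ))  # offset within the column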
On Wed, Jan 6, 2010 at 1:09 PM, Hudes, Dana <hudesd@hra.nyc.gov> wrote:
So indeed this feature turns your $$$$$$$ Hitachi 9990V into a JBOD. Wow. I guess there
are products that need this sort of thing. And enterprises where the people running the
SAN array can't manage it properly, so the server administrators need to bypass them.
The other question, of one SCSI queue per column as a benefit of striping, is
interesting. Doesn't this just generate extra disk I/O? The array is already doing
RAID with 2 parity stripes. Now what? Yet this is effectively what ZFS does, so there
must be a performance gain. Hmm. Multiple SCSI queues might make sense if you have a
large number of CPUs (like the Sun 4v architecture, especially the 5240 with 128
threads or the 5440 with 256, or a 4+ board domain on the SunFire 25K which gives you
32 cores), all of which are running threads that do disk I/O. This benefit seems more
practical in the ZFS approach, where you have one volume-equivalent (the zpool is
both disk group and VM volume in that it has the storage layout) and many filesystems, so
you would likely have multiple processes doing independent disk I/O. In the VxVM one
volume - one filesystem model, your e.g. Oracle table space is in one filesystem as one
huge file (possibly other databases are files in other filesystems). Even if you have
multiple listeners doing their thing, ultimately there's one file they're working
on... of course Oracle has row locking and other parallelization... hmm.
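For what it's worth, one way to see whether the per-LUN queues are actually being
exercised on Solaris is iostat; the actv and wait columns show commands active on, and
queued for, each device:

  iostat -xnz 5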
________________________________
From: William Havey [mailto:bbhavey@gmail.com]
Sent: Wednesday, January 06, 2010 12:30 PM
To: Hudes, Dana
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
Yes, it certainly does. And that is why Symantec put the feature in the VM product: to
use host-based software to construct and control storage from host to physical disk.
This would help eliminate multi-vendor chaos in the storage aspects of the data center.
On Wed, Jan 6, 2010 at 12:19 PM, Hudes, Dana <hudesd@hra.nyc.gov> wrote:
> the ISP feature of VM would allow you to drill down to individual spindles and
> place subdisks on each spindle.
Individual spindles of the RAID group? Doesn't that defeat the purpose of the RAID
group? Striping across LUNs gets... interesting; we usually just use them concatenated.
Of course, that's with a real SAN array such as a Hitachi 99x0 or Sun 61x0. I'm not sure
I see the point of striping LUNs. If you are having performance problems with the
array, fix the layout of the RAID group on the array: that's why you pay the big
bucks to Hitachi for their hardware. I'm not sure I want to know about the load that
could flatline a RAID-6 array of 6 15K RPM Fibre Channel disks backed by a
multi-gigabyte RAM cache.
I have certainly seen bad storage layout on the host cause hot spots. That's when
people make ridiculous numbers of small (gigabyte or so) volumes scattered all over
the place -- another argument against the old way of doing things with databases and
raw volumes (if you're going to use raw volumes, at least use decent-sized ones, not 2GB
each). While old (pre-10) Solaris AIO did indeed suck dead bunnies through a straw for
performance, that's no longer a problem in Solaris 10 ZFS if you use it natively
(using "branded" zones to run Solaris 8 and 9 puts the old AIO interface in front),
nor would I expect it to be a problem with VxFS.
________________________________
From: veritas-vx-bounces@mailman.eng.auburn.edu<mailto:veritas-vx-bounces@mailman.eng.auburn.edu> \
[mailto:veritas-vx-bounces@mailman.eng.auburn.edu<mailto:veritas-vx-bounces@mailman.eng.auburn.edu>] \
On Behalf Of William Havey
Sent: Wednesday, January 06, 2010 12:00 PM
To: przemolicc@poczta.fm
Cc: veritas-vx@mailman.eng.auburn.edu
Subject: Re: [Veritas-vx] Relayout volume from 2 to 4 columns
VM views the two RAID groups as single LUNs. It needn't be concerned with the layout
of each RAID group. To change from 2 columns to 4 columns, use the relayout option to
vxassist and also specify the two new LUNs on which to place the two new columns.
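Something like this, with placeholder disk group, volume, and disk media names (the
new LUNs are given as storage attributes at the end of the command):

  vxassist -g mydg relayout myvol ncol=4 mydg13 mydg19
  # monitor progress
  vxrelayout -g mydg status myvol
  vxtask list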
That being said, the ISP feature of VM would allow you to drill down to individual
spindles and place subdisks on each spindle.
Bill
On Wed, Jan 6, 2010 at 6:36 AM, <przemolicc@poczta.fm> wrote:
Hello,
we are using SF 5.0 MP3 on Solaris 10, attached to a SAN-based hardware array.
On this array we have created 2 RAID groups, and on each RG we have created
a few LUNs:
raid group: RG1 RG2
LUN1 LUN7
LUN2 LUN8
LUN3 LUN9
LUN4 LUN10
LUN5 LUN11
LUN6 LUN12
For performance reasons, some of our volumes are striped between the two RAID groups
(using two columns, ncol=2), e.g.:
pl <name> <vol> ENABLED ACTIVE 419256320 STRIPE 2/128 RW
In this configuration, I/Os involve two RAID groups.
It seems that in the future, in certain cases, performance might not be as expected,
so we would like to add two additional LUNs (taken from two additional RAID groups)
and relayout the whole volume from 2 columns to 4 columns, e.g.:
raid group: RG1 RG2 RG3 RG4
LUN1 LUN7 LUN13 LUN19
LUN2 LUN8 LUN14 LUN20
LUN3 LUN9 LUN15 LUN21
LUN4 LUN10 LUN16 LUN22
LUN5 LUN11 LUN17 LUN23
LUN6 LUN12 LUN18 LUN24
Is it possible to order a relayout of existing volumes to spread them over all four
RGs? Can I somehow specify that the relayout should use these particular LUNs?
Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/
_______________________________________________
Veritas-vx maillist - Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx