Hi,

thanks for the answer! I added little comments in-line too. [sorry for
this top post]

After I wrote the mail the other day, I started implementing my ideas.
I used lvm.sh as a base, but after some changes I discovered more and
more corner cases and issues, so in the end it changed quite a lot.
I'll write a new mail attaching this new script.

I did some tests with various scenarios, for example: all machines
losing the same PVs; only one machine losing PVs; removing an LV;
removing the whole VG; etc. But to be sure that there aren't problems I
have to do more tests and also try to document them.

Thanks! Bye!

On Wed, 2007-10-03 at 09:57 -0500, Jonathan Brassow wrote:
> Great stuff!  Much of what you are describing I've thought about in
> the past, but just haven't had the cycles to work on.  You can see in
> the script itself, the comments at the top mention the desire to
> operate on the VG level.  You can also see a couple vg_* functions
> that simply return error right now, but were intended to be filled in.
>
> Comments in-line.
>
> On Sep 28, 2007, at 11:14 AM, Simone Gotti wrote:
>
> > Hi,
> >
> > Trying to use a non-cluster VG in Red Hat Cluster I noticed that
> > lvm.sh, to avoid metadata corruption, is forcing the use of only one
> > LV per VG.
> >
> > I was thinking that other clusters don't have this limitation, as
> > they let you use a VG on only one node at a time (and also in only
> > one service group at a time).
> >
> > To test if this was possible with lvm2 I made little changes to
> > lvm.sh (just variable renames, use of vgchange instead of lvchange
> > for tag adding) and, using the same changes needed to
> > /etc/lvm/lvm.conf (volume_list = [ "rootvgname", "@my_hostname" ]),
> > it looks like this idea works.
> >
> > I can activate the VG and all of its volumes only on the node whose
> > hostname is in the VG tags, and the start on the other nodes is
> > refused.
> >
> > Now, will this idea be accepted? If so, here is a list of possible
> > needed changes and other ideas:
> >
> > *) Also make unique="1", or better primary="1", and remove the
> > parameter "name", as only one service can use a VG.
>
> Sounds reasonable.  Be careful when using those parameters though,
> they often result in cryptic error messages that are tough to
> follow.  I do checks in lvm.sh where possible to be able to give the
> user more information on what went wrong.

My idea was that a VG can be owned by a single service. The check you
added in lvm.sh forces a single LV in the VG, but one can still use
multiple lvm.sh resources with the same vg_name in different services,
leading to problems anyway. Probably I missed something... :D
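Just to make the tag idea a bit more concrete, the start/stop logic of
my modified agent is roughly like the sketch below. It is only
hand-written shell with a made-up VG name (myvg) and simplified error
handling, not the real code; I'll attach the real script in the next
mail.

  #!/bin/bash
  # Rough sketch only: made-up VG name, simplified error handling.
  #
  # /etc/lvm/lvm.conf on every node contains:
  #   volume_list = [ "rootvgname", "@my_hostname" ]
  # so LVM refuses to activate a VG (or its LVs) unless the VG carries
  # a tag equal to the local hostname.

  VG=myvg                  # hypothetical VG name used by one service
  NODE=$(uname -n)

  vg_start() {
      # Claim ownership of the whole VG, then activate all of its LVs.
      # On any other node the activation is refused by volume_list.
      vgchange --addtag "$NODE" "$VG" || return 1
      vgchange -a y "$VG" || return 1
  }

  vg_stop() {
      # Deactivate every LV in the VG and give up ownership.
      vgchange -a n "$VG" || return 1
      vgchange --deltag "$NODE" "$VG" || return 1
  }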
>
> > *) What should vg_status do?
> > a) Monitor all the LVs,
> > or
> > b) Check only the VG and use ANOTHER resource agent for every LV
> > used by the cluster? That way I can create/remove/modify LVs on that
> > VG that aren't under rgmanager control without any error reported by
> > the status functions of the lvm.sh agent.
> > Other clusters also distinguish between VG and LV and have two
> > different agents for them.
>
> This is where things get difficult.  It would be ok to modify LVs on
> that VG as long as it's done on the machine that has ownership.  Tags
> should prevent otherwise, so it should be ok.
>
> Users would have to be careful not to assign (or barriers would have
> to prevent them from assigning) different LVs in the same VG to
> different services.  Otherwise, if a service fails (application level)
> and must be moved to a different machine, we would have to find a way
> to move all services associated with the VG to the next machine.  I
> think there are ways to mandate this (that service A stick with
> service B), but we would have to have a way to enforce it.
>
> > Creating two new agents would also leave the current lvm.sh without
> > changes and keep backward compatibility for whoever is already using
> > it.
> >
> > Something like this (let's call lvm_vg and lvm_lv respectively the
> > agents for the VG and the LV):
> >
> > [configuration example lost in quoting]
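Since the configuration example above was eaten by the quoting, this is
roughly the kind of cluster.conf layout I have in mind. The agent names
lvm_vg and lvm_lv come from the proposal above, but the attributes
(vg_name, lv_name, the nesting, the fs child) are only a guess at this
point, nothing is implemented this way yet:

  <service name="db_service" autostart="1">
          <!-- one lvm_vg resource owns the whole VG for the service -->
          <lvm_vg name="db_vg" vg_name="vg_db">
                  <!-- one lvm_lv resource per LV managed by the cluster -->
                  <lvm_lv name="db_data" vg_name="vg_db" lv_name="data"/>
                  <lvm_lv name="db_logs" vg_name="vg_db" lv_name="logs"/>
          </lvm_vg>
          <fs name="db_fs" device="/dev/vg_db/data"
              mountpoint="/var/lib/db" fstype="ext3"/>
  </service>

Any other LV in vg_db that is not listed here would simply be ignored
by the status checks of lvm_vg/lvm_lv, which is what option b) above is
trying to achieve.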