List: linux-ha
Subject: Re: [Linux-HA] LVM usage?
From: Kirk Ismay <captain () netidea ! com>
Date: 2006-11-30 21:41:26
Message-ID: 456F5006.4070304 () netidea ! com
John Hearns wrote:
> A couple of Linux-HA questions tonight.
> Bear with me please!
>
> I'm setting up a two-node active/passive cluster for use in an HPC
> cluster.
> There is a shared SCSI storage array.
> The primary head node runs as an NFS server, and I'm arranging for the
> NFS server to fail over. So far so good.
> Using text-mode haresources (yes, I know), I have:
>
> master IPaddr2::192.168.1.254/16/eth0:0 \
> Filesystem::/dev/vg01/users::/users::xfs \
> Filesystem::/dev/vg02/data1::/data1::xfs \
> nfsserver \
> sgemaster
>
I'm working on building a similar configuration, though my goal is to
deploy Active-Active with each front end getting a separate RAID
volume. I'm fairly new at this myself, but I'll share what I've learned
so far.
In my case I needed to use the fsid= flag in /etc/exports to prevent the
stale NFS filehandle problem I was having.
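For example (the paths match your haresources; the client network and the
fsid values here are just placeholders -- the point is that each export
gets a fixed fsid that is identical on both nodes, so clients keep the
same filehandle across a failover):

```
# /etc/exports -- keep this identical on both nodes
/users  192.168.0.0/16(rw,sync,fsid=1)
/data1  192.168.0.0/16(rw,sync,fsid=2)
```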
> So LVM is being used on the master node, and the filesystems are
> mounting OK.
> Question is - how do I use the LVM script to enable/disable the volumes?
> OR am I misunderstanding?
> This is a SuSE system, and LVM is started on both the master and
> secondary nodes at boot time, by /etc/init.d/boot.lvm (??)
> So the same volumes are available on each server, i.e. I can log in to
> secondary server and mount the volume 'by hand'
>
> Is it that I should 'switch off' LVM at boot time (/etc/sysconfig flag
> ??) Then use resource.d/LVM to enable the volumes at heartbeat start ???
I think all you need to do is add the LVM resource to your haresources,
like so:
master IPaddr2::192.168.1.254/16/eth0:0 \
LVM::/dev/vg01 LVM::/dev/vg02 \
Filesystem::/dev/vg01/users::/users::xfs \
Filesystem::/dev/vg02/data1::/data1::xfs \
nfsserver \
sgemaster
From what I can see, the heartbeat LVM script marks the volume group
inactive, so while it is still visible to lvdisplay -C, you can't
actually mount it. When I do an lvscan on my system, the volume assigned
to the other node shows as inactive:
# lvscan
ACTIVE '/dev/storage2/home2' [100.00 GB] inherit
inactive '/dev/storage1/home1' [100.00 GB] inherit
Heartbeat will run 'LVM /dev/vg01 stop' on your secondary node on
startup, marking the volume inactive until a failover event occurs.
This will keep you from being able to mount the volume by hand.
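For what it's worth, I believe the resource.d/LVM script does that
activation/deactivation via vgchange -- something like the following.
This is my guess from the behaviour above, not a close reading of the
script:

```
# roughly what 'LVM vg01 start' / 'LVM vg01 stop' boil down to
vgchange -a y vg01   # activate the volume group on the node taking over
vgchange -a n vg01   # deactivate it on the node releasing the resource
```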
> Also is the resource.d/SCSI script advisable?
> I am a bit wary of it. Also how do you get the LUNs/IDs of specific
> disks? Yes, I know it is in /proc, but the recipe?
In my case, using the LinuxSCSI script wound up causing my kernel to
mark the volume as dead, and I couldn't remount the volume again without
a reboot, so I'm not using it at present.
There is an lsscsi utility (in my case available as a Debian package)
that will show you the LUN information. Its addressing maps directly onto
what the LinuxSCSI resource script wants:
[0:0:0:0] disk SEAGATE ST336607LC 0007 -
[0:0:1:0] disk SEAGATE ST336607LC 0007 -
[1:0:0:0] disk SC U320/ SATA16R R0.0 -
[1:0:0:1] disk SC U320/ SATA16R R0.0 -
[1:0:0:2] disk SC U320/ SATA16R R0.0 -
From that, you can add LinuxSCSI::1.0.0.1 (or whichever ID applies) to
your haresources, between the IPaddr2 and LVM resources.
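Note that lsscsi prints the address as [host:channel:target:lun] while
the haresources entry uses dots; the conversion is mechanical, e.g.:

```shell
# Convert an lsscsi address like [1:0:0:1] into the dotted form
# used by a haresources entry such as LinuxSCSI::1.0.0.1
scsi_addr='[1:0:0:1]'
echo "$scsi_addr" | tr -d '[]' | tr ':' '.'
```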
Do you have the NFS server working so far?
Hope that helps.
Sincerely,
Kirk Ismay
System Administrator
--
Net Idea
201-625 Front Street Nelson, BC V1L 4B6
P:250-352-3512 | F:250-352-9780 | TF:1-888-352-3512
10 Years of Service Excellence!
Check out our brand new website! www.netidea.com
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems