
List:       linux-ha
Subject:    root gfs over fibrechannel
From:       Alan Robertson <alanr@suse.com>
Date:       2000-05-31 7:30:55

Hi,

Matt O'Keefe asked me to post this for him...

	-- Alan Robertson
	   alanr@suse.com

----- Forwarded message from Ken Preslan <kpreslan@zarniwoop.com> -----

Date: Mon, 29 May 2000 15:01:51 -0500
From: Ken Preslan <kpreslan@zarniwoop.com>
To: EXT / DEVOTEAM VAROQUI Christophe <ext.devoteam.varoqui@sncf.fr>
Cc: "'gfs-devel@borg.umn.edu'" <gfs-devel@borg.umn.edu>
Subject: Re: root gfs over fibrechannel
X-Mailer: Mutt 0.95.4us
In-Reply-To: <A979D299E02AD31188EA0000834A03D21558C9@S64P17BIA40>; from
EXT / DEVOTEAM VAROQUI Christophe on Mon, May 29, 2000 at 03:01:03PM
+0200
Precedence: bulk

On Mon, May 29, 2000 at 03:01:03PM +0200, EXT / DEVOTEAM VAROQUI
Christophe wrote:
> could someone comment on the possibility of having the root fs on a gfs
> volume on a fibrechannel JBOD ?

This is definitely something we're interested in.

As far as GFS itself is concerned, this works fine.  We've had GFS up
and running (as root, with nolock) on internal SCSI and IDE disks.

There are two tricky parts, though.  One is getting LILO to read the
kernel off the Pool.  LILO gets confused when you try to tell it to
read the kernel off of a block device that isn't recognized by the
BIOS.  (The same problem also exists for LVM and MD volumes.)
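
For illustration, here is roughly what the lilo.conf for the
workaround described below looks like: the kernel and initrd sit on a
BIOS-visible internal disk, so LILO never has to read the Pool at all.
(Device names and paths are made up for the example.)

	# /etc/lilo.conf -- everything LILO touches is on the internal disk
	boot=/dev/hda                     # MBR of the BIOS-visible disk
	image=/boot/vmlinuz-gfs           # kernel image on the internal disk
	    label=gfs-root
	    initrd=/boot/initrd-gfs.img   # ramdisk that assembles the Pool
	    read-only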

The other is that in order for the lock modules to work, the network
has to be up before GFS can mount the root filesystem.  GFS needs to
be able to acquire locks that protect the superblock, root inode,
etc. as it boots up.

The easiest way to solve these two problems (in the short term) is to
use the initial ramdisk (initrd) feature of LILO.  The kernel boots up
off of an internal disk in the machine.  LILO loads in a ramdisk that
contains the files needed to assemble the Pool and set up the network.
It then changes the root filesystem to the GFS filesystem you wanted
as root.  All the rest of the machine's configuration information can
then come from the shared FS.
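
A sketch of what the linuxrc on that ramdisk might do.  The module,
device, and address names below are examples only, and the exact
root-switching mechanics depend on the kernel's initrd support:

	#!/bin/sh
	# linuxrc -- runs out of the initial ramdisk, before the real
	# root filesystem is mounted.

	# 1. Load the drivers: FC HBA, NIC, and GFS with its lock module.
	insmod qla2x00        # FC HBA driver (example name)
	insmod eepro100       # NIC driver (example name)
	insmod gfs            # GFS itself

	# 2. Bring the network up so the lock module can talk before
	#    any GFS mount is attempted.
	ifconfig eth0 10.0.0.1 netmask 255.255.255.0 up

	# 3. Assemble the Pool from the shared fibrechannel disks.
	passemble

	# 4. Mount the GFS filesystem that will become root; the
	#    kernel's change-root machinery then switches over to it.
	mount -t gfs /dev/pool/root /new-root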

Obviously this isn't the best way of doing things in the long term.
Some sort of RARP/BOOTP/DHCP/whatever setup, so that the machine gets
its config and kernel from the network, is preferable, but we don't
have that capability just yet.

> Are there restrictions on which HBAs can be used?

If you load the ramdisk from a local disk (or the network), no.  

You can load the ramdisk off of a non-Pool partition of a shared
disk, but you have to make sure that the HBA will always assign the
same disk to the same BIOS disk number.  (I don't know much about how
the different HBAs handle that.)

> What is the status of context-sensitive links in GFS?  (in order to share
> a single root fs between nodes of a cluster)

They work.  I described their use here:

http://www.globalfilesystem.org/mlists/gfs-devel/msg00794.html
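
For the archives, the idea looks roughly like the sketch below: a
symlink whose target is expanded per-node, so each machine follows it
into its own subtree.  The syntax here is from memory and the names
are hypothetical; see the message above for the real details.

	# One private subtree per node on the shared root FS:
	mkdir /gfs/n01 /gfs/n02
	# "@hostname" resolves differently on each node, so node n01
	# follows /gfs/cluster into /gfs/n01, and n02 into /gfs/n02:
	ln -s @hostname /gfs/cluster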

> (any pointer to relevant documentation would be greatly appreciated :)
> 
> Could someone clarify this one too: the drivers for the QL2200 and
> Interphase HBAs are said to transport SCSI and IP.  Can they
> transport these protocols simultaneously, or is it a choice to be
> made at probe time?

I think you can do both on the Interphase.  I believe we've done that
before.   

I don't think the IP drivers for the Qlogic card are ready/stable yet.

-- 
Ken Preslan <kpreslan@zarniwoop.com>

------------------------------------------------------------------------------
Linux HA Web Site:
  http://linux-ha.org/
Linux HA HOWTO:
  http://metalab.unc.edu/pub/Linux/ALPHA/linux-ha/High-Availability-HOWTO.html
------------------------------------------------------------------------------
