
List:       ssic-linux-users
Subject:    RE: [SSI-users] Acting as lustre client
From:       "Walker, Bruce J" <bruce.walker@hp.com>
Date:       2004-09-20 5:20:19
Message-ID: 3689AF909D816446BA505D21F1461AE4C75195@cacexc04.americas.cpqcorp.net

Cuong,
   Congratulations on getting both the Lustre server up and a Lustre
client integrated with OpenSSI.  What you are seeing is the expected
result at this time.  Let me explain:
In some ways the NFS client and the Lustre client are the same (both are
NAS (network-attached storage) filesystems).  In OpenSSI we modified the
NFS mount code (in the kernel) to distribute the mount to all the nodes
of the cluster (and made sure nodes that came up later also saw it).
Thus you only had to do an NFS mount once, and all nodes saw it and went
directly to the server.  We have not made the analogous changes to the
Lustre mount code, so you have to mount it on each node, and because you
are mounting on each node it shows up in mtab multiple times, which is
why df shows it multiple times.  As you also noticed, if you don't mount
it on a node, that node doesn't see the filesystem.  If there is enough
interest, David Zafman (who did the NFS modifications) would no doubt do
the Lustre updates.
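
In the meantime, mounting on every node, as you did with onall, is the
right approach.  Here is a rough sketch, reusing your config.xml and the
"client" node name, and assuming the standard lconf --cleanup teardown
(untested on my end):

   $ onall lconf --gdb --node client config.xml      # mount the Lustre client on every node
   $ grep lustre /etc/mtab                           # one entry per node's mount, hence the df duplicates
   $ onall lconf --cleanup --node client config.xml  # tear the mount down on every node

The duplicate df lines are just mtab bookkeeping; each entry refers to
the same underlying filesystem.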

Bruce




> -----Original Message-----
> From: ssic-linux-users-admin@lists.sourceforge.net 
> [mailto:ssic-linux-users-admin@lists.sourceforge.net] On 
> Behalf Of smee
> Sent: Saturday, September 18, 2004 11:01 PM
> To: ssic-linux-users@lists.sourceforge.net
> Subject: [SSI-users] Acting as lustre client
> 
> 
> Hi all,
> 
> OpenSSI newbie still feeling my way around.
> I've got a few machines forming an SSI cluster. I've installed the
> Lustre RPMs (the modules and lustre-lite (non-debug version)). I have
> also successfully configured another set of machines as OSTs and an
> MDS... this was done using the RPMs downloaded from the Lustre site
> itself.
> 
> When I log into one SSI node and try to mount the Lustre storage, it
> mounts successfully and everything works as it should. This was done
> with a command like so:
>    $ lconf --gdb --node client config.xml
> 
> The mount is also seen by other nodes within the cluster, but if I
> write to the Lustre-mounted storage from the node that performed the
> mount, the other nodes do not see the changes. Likewise, changes made
> on nodes which did not perform the mount are not seen by the node that
> _did_ perform the mount.
> However, if I mount the Lustre storage on all nodes, then the changes
> are seen on all nodes and everything works perfectly as it should.
>    $ onall lconf --gdb --node client config.xml
> 
> Although it works, a "df" command produces duplicate entries for the
> mounted storage.
>    $ df
>    Filesystem           1K-blocks      Used Available Use% Mounted on
>    /dev/1/hda2           72769176    634860  68437760   1% /
>    /dev/1/hda1             101089     18632     77238  20% /boot
>    config                72769176    634860  68437760   1% /mnt/lustre
>    config                72769176    634860  68437760   1% /mnt/lustre
> 
> 
> My question is: should I be using "onall lconf..." when working with
> Lustre under OpenSSI? And should I be concerned about the duplicate
> entries in the "df" output?
> 
> 
> Cheers.
> Cuong.
> 
> 


_______________________________________________
Ssic-linux-users mailing list
Ssic-linux-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ssic-linux-users
