
List:       ssic-linux-users
Subject:    Re: [SSI-users] Acting as lustre client
From:       smee <snotmee () gmail ! com>
Date:       2004-09-20 7:25:23
Message-ID: 85efb5fe0409200025629d3cd6 () mail ! gmail ! com

(Resending; Gmail had trouble sending the first attempt.)
==================


Thanks Bruce.
I can live with having to mount it on each node while OpenSSI and
Lustre are evolving.

My current problem is poor throughput from my e1000 NICs (360 Mbit/s
instead of close to 900 Mbit/s). The poor speeds are skewing my
iozone results on the Lustre mount.
If anyone has pointers to docs or sites to help me troubleshoot this,
I would be much obliged.
FYI: 
- using latest linux driver from Intel site (5.4.something)
- tried InterruptThrottleRate=0  : no diff
- tried jumbo frames: locked the NIC
- tried send/recv buffers: no diff
- tried txqueuelen: no diff
- tried various send/recv min/default/max buffer sizes: no diff
- looked at the web100 site... they have mainstreamed their efforts into
the Linux 2.4.27+ kernel, but I'm having trouble applying and
compiling the OpenSSI kernel patches.
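
One knob the list above doesn't cover is whether the socket buffer
maxima actually cover the link's bandwidth-delay product; 2.4-era
defaults are often too small for gigabit. A rough sanity check (the
line rate and RTT below are assumptions; measure your own RTT with
ping):

```shell
#!/bin/sh
# Bandwidth-delay product: the buffering needed to keep the link full.
# BDP (bytes) = line rate (bits/s) / 8 * RTT (s)
BW_BITS=1000000000   # assumed 1 Gbit/s line rate
RTT_US=200           # assumed 0.2 ms LAN round-trip time
BDP=$(( BW_BITS / 8 * RTT_US / 1000000 ))
echo "BDP: $BDP bytes"
# Compare against the current maxima; if they are smaller, raise them:
#   sysctl -w net.core.rmem_max=262144
#   sysctl -w net.core.wmem_max=262144
```

On a flat LAN the BDP is tiny, so if your buffers already exceed it the
bottleneck is likely elsewhere (duplex mismatch, interrupt rate, PCI
bus); `ethtool eth0` should at least report 1000Mb/s full duplex.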

Does anyone have an idiot's guide to patching and compiling OpenSSI
specifically? As I recall, I had problems fetching from the CVS
sources last week (timeout issues when fetching the kernel RPM). I
also had trouble following the README for compiling. I believe it is
out of date?
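
For reference, the generic sequence for patching and rebuilding a 2.4
kernel looks roughly like the following. The tree path and patch name
are placeholders, not the actual OpenSSI artifact names, so treat this
as a sketch rather than the missing guide:

```shell
#!/bin/sh
# Sketch: apply a patch to a 2.4 kernel tree and rebuild.
# The tree path and patch file are hypothetical placeholders.
build_patched_kernel() {
    tree=$1; patchfile=$2
    cd "$tree" || return 1
    # Dry-run first so a mismatched patch level fails before touching files.
    patch -p1 --dry-run < "$patchfile" || return 1
    patch -p1 < "$patchfile" || return 1
    make oldconfig &&               # carry the existing .config forward
    make dep bzImage modules &&     # 2.4.x still needs 'make dep'
    make modules_install install
}
# Usage (as root): build_patched_kernel /usr/src/linux-2.4.26 /tmp/openssi.patch
```

The dry-run step is the part that usually catches a wrong `-p` level
before the tree is half-patched.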

Thanks for any tidbit of information.
Cheers,
Cuong.


On Sun, 19 Sep 2004 22:20:19 -0700, Walker, Bruce J <bruce.walker@hp.com> wrote:
> Cuong,
>    Congratulations on getting both the Lustre server up and an
> integrated Lustre client with OpenSSI.  What you are seeing is the
> expected result at this time.  Let me explain:
> In some ways the NFS client and the Lustre client are the same (both
> are NAS (network attached storage) filesystems).  In OpenSSI we
> modified the NFS mount code (in the kernel) to distribute the mount
> to all the nodes of the cluster (and made sure nodes that came up
> later also saw it).  Thus you only had to do an NFS mount once; all
> nodes saw it and went directly to the server.  We have not made the
> analogous changes to the Lustre mount code, so you have to mount it
> on each node, and because you are mounting on each node it shows up
> in mtab multiple times and thus df shows it multiple times.  As you
> also noticed, if you don't mount it on a node, that node doesn't see
> the filesystem.  If there is enough interest, David Zafman (who did
> the NFS modifications) would no doubt do the Lustre updates.
> 
> Bruce
> 
> 
> 
> 
> > -----Original Message-----
> > From: ssic-linux-users-admin@lists.sourceforge.net
> > [mailto:ssic-linux-users-admin@lists.sourceforge.net] On
> > Behalf Of smee
> > Sent: Saturday, September 18, 2004 11:01 PM
> > To: ssic-linux-users@lists.sourceforge.net
> > Subject: [SSI-users] Acting as lustre client
> >
> >
> > Hi all,
> >
> > OpenSSI newbie here, still feeling my way around.
> > I've got a few machines forming an SSI cluster. I've installed the
> > Lustre RPMs (the modules and Lustre Lite, non-debug version). I have
> > also successfully configured another set of machines as OSTs and an
> > MDS; this was done using the RPMs downloaded from the Lustre site
> > itself.
> >
> > When I log into one ssi node and try to mount the lustre storage, it
> > mounts successfully and everything works as it should. This was done
> > with a command like so:
> >    $ lconf --gdb --node client config.xml
> >
> > The mount is also seen by other nodes within the cluster, but if I
> > write to the Lustre-mounted storage from the node that performed the
> > mount, the other nodes do not see the changes. Likewise, changes made
> > on nodes which did not perform the mount are not seen by the node
> > that _did_ perform the mount.
> > However, if I mount the Lustre storage on all nodes, then the
> > changes are seen on all nodes and everything works as it should.
> >    $ onall lconf --gdb --node client config.xml
> >
> > Although it works, a "df" command produces duplicate entries for the
> > mounted storage.
> >    $ df
> >    Filesystem           1K-blocks      Used Available Use% Mounted on
> >    /dev/1/hda2           72769176    634860  68437760   1% /
> >    /dev/1/hda1             101089     18632     77238  20% /boot
> >    config                72769176    634860  68437760   1% /mnt/lustre
> >    config                72769176    634860  68437760   1% /mnt/lustre
> >
> >
> > My question is: should I be using "onall lconf ..." when working
> > with Lustre under OpenSSI? And should I be concerned about the
> > duplicate entries in the "df" output?
> >
> >
> > Cheers.
> > Cuong.
> >
> > 
>


_______________________________________________
Ssic-linux-users mailing list
Ssic-linux-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ssic-linux-users
