List: ssic-linux-devel
Subject: [SSI-devel] RE: [SSI-users] Re: openSSI webView 0.1 released!
From: "Walker, Bruce J" <bruce.walker () hp ! com>
Date: 2004-11-02 15:10:38
Message-ID: 3689AF909D816446BA505D21F1461AE4C7525A () cacexc04 ! americas ! cpqcorp ! net
Xavier,
To clarify, CFS does have caching. What it doesn't have is "on-disk caching", which is what you propose. Instead, I would suggest using DRBD to provide a consistent copy of the root filesystem on two or more nodes (RH is also working on enhancements to CLVM to provide cross-node mirroring). According to the DRBD web site, the 2.4.x version is now stable, so perhaps we should retry it even before we move to the 2.6 kernel.
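For reference, a minimal sketch of the kind of DRBD resource definition this would involve (node names, device paths, and addresses below are purely hypothetical, and the exact syntax depends on the DRBD version in use):

```
# Hypothetical /etc/drbd.conf fragment: mirror the root partition
# between two nodes over a dedicated replication link.
resource root-mirror {
  protocol C;                    # synchronous replication: a write completes
                                 # only after both nodes have it on disk
  on node1 {
    device    /dev/drbd0;        # replicated block device exposed to the fs
    disk      /dev/sda2;         # backing partition holding the root fs
    address   192.168.0.1:7788;  # replication link endpoint
    meta-disk internal;          # keep DRBD metadata on the backing disk
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```

With a setup along these lines, either node could bring up the mirrored root device if the other goes down, which is what makes it interesting as an alternative to shared disk hardware for the init nodes.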
Bruce
> -----Original Message-----
> From: ssic-linux-users-admin@lists.sourceforge.net
> [mailto:ssic-linux-users-admin@lists.sourceforge.net] On
> Behalf Of Kilian CAVALOTTI
> Sent: Tuesday, November 02, 2004 4:03 AM
> To: En Chiang Lee
> Cc: Karl Vogel; 'kilian.cavalotti@stix.polytechnique.fr';
> Watson, Brian J.; ssic-linux-users@lists.sourceforge.net;
> ssic-linux-devel@lists.sourceforge.net
> Subject: Re: [SSI-users] Re: openSSI webView 0.1 released!
>
>
> On Tuesday 02 November 2004 11:56, En Chiang Lee wrote:
> > > If you have shared disk hardware, the initnode can go down without
> > > the whole cluster going down.
> >
> > The cluster will fail only when *all* the initnodes go down.
>
> I know that. But with shared hardware, your init nodes will very likely be
> located in the same physical area, while your other nodes could be
> anywhere. So, in case of a power (or network) outage, both of your init
> nodes will be inaccessible from the rest of the cluster, and then, you
> know what. :)
>
> Well, all I wanted to say is that init nodes have a special status in an
> openSSI cluster, at least without a means to dynamically distribute the
> root filesystem across the cluster.
>
> (Below are pure suggestions, with absolutely no idea of their feasibility.)
> I think I've read that CFS is based on NFS: couldn't we imagine a version
> of CFS with file caching, like AFS or Coda? This way, 'init nodes' would
> become 'booting nodes', and we could approach a true peer-to-peer cluster
> infrastructure, with no special status for any node. You probably already
> know about CacheFS; it could perhaps be useful in this case:
> http://www.kerneltraffic.org/kernel-traffic/kt20041030_281.html#2
>
> --
> Kilian CAVALOTTI
> Ingénieur Systèmes & Réseaux
> Laboratoire STIX, École Polytechnique
> F-91128 Palaiseau
> Tel: +33 1 69 33 41 13
>
>
> -------------------------------------------------------
> This SF.Net email is sponsored by:
> Sybase ASE Linux Express Edition - download now for FREE
> LinuxWorld Reader's Choice Award Winner for best database on Linux.
> http://ads.osdn.com/?ad_idU88&alloc_id065&op=ick
> _______________________________________________
> Ssic-linux-users mailing list
> Ssic-linux-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ssic-linux-users
>
_______________________________________________
ssic-linux-devel mailing list
ssic-linux-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ssic-linux-devel