
List:       veritas-ha
Subject:    Re: [Veritas-ha] Basic NIC failover question
From:       "Nachbaur Hubert" <hubert.nachbaur@viamat.com>
Date:       2005-05-20 6:09:51
Message-ID: 707B8F8DB38DED45AC6A063582B0F7288A0322@chbsl01ex011.ch.viamat.com

Hi Jose,

The bond0 interface has one IP address defined in ifcfg-bond0, and in the
service groups we have the NIC and IP resources, which define the virtual IP
addresses. As I said before, the bond0 interface is completely transparent to VCS.

IP nbu_ip (
                Device = bond0
                Address = "x.x.x.x"
                NetMask = "255.255.0.0"
                Options = "broadcast x.x.x.x"
                )

NIC nbu_nic (
                Device = bond0
                NetworkHosts = { "x.x.x.x", "x.x.x.x" }
                )
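
(NetworkHosts is the list of addresses the NIC agent pings to decide whether
the interface is still alive; the default gateway or another always-reachable
host on the subnet is a common choice.)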

If you unplug both eth0 and eth1, the NIC resource will fault (that's the
normal and correct behaviour). But if you only unplug one of eth0 or eth1, VCS
won't recognize that event at all. For testing, watch the output of "dmesg";
it's very helpful during configuration.
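
If you want to check the individual slave links without digging through
"dmesg", the bonding driver also exposes its state under /proc:

# cat /proc/net/bonding/bond0

That file reports the MII status of each enslaved interface, so after pulling
one cable you can see which leg went down.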


Regards,

Hubert

-----Original Message-----
From: Jose Goyana [mailto:jose.goyana@cobra.com.br]
Sent: Thursday, May 19, 2005 5:32 PM
To: Nachbaur Hubert
Subject: Re: [Veritas-ha] Basic NIC failover question

Hi Nachbaur!

    Are you associating a "floating" IP address with bond0? Does it work
fine when you bring this resource online? Does it work fine when both eth0 and
eth1 are unplugged?
    Many thanks!

Jose

----- Original Message ----- 
From: "Nachbaur Hubert" <hubert.nachbaur@viamat.com>
To: "Jose Goyana" <jose.goyana@cobra.com.br>; 
<veritas-ha@mailman.eng.auburn.edu>
Sent: Thursday, May 19, 2005 11:23 AM
Subject: Re: [Veritas-ha] Basic NIC failover question


Hi Jose,

We are also using VCS 4.0 MP1 with RHEL3 Update 2, and our servers use the
bonding driver for NIC teaming (active/active). The following changes are
required to use the bonding driver (eth0 and eth1 are the teamed devices,
eth2 and eth3 are for the heartbeat):

- /etc/modules.conf:
alias eth0 e1000
alias eth1 e1000
alias eth2 tg3
alias eth3 tg3
alias bond0 bonding
options bond0 miimon=100 mode=2
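
(For reference: miimon=100 makes the driver poll the link state via MII every
100 ms, and mode=2 is balance-xor, i.e. both slaves carry traffic; mode=1
would give a plain active/backup pair instead.)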

- /etc/sysconfig/network-scripts/ifcfg-eth[01]:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
PEERDNS=no
TYPE=Ethernet
MASTER=bond0
SLAVE=yes

- /etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
MTU=""
NETMASK=x.x.x.x
BOOTPROTO=none
IPADDR=x.x.x.x
BROADCAST=x.x.x.x
GATEWAY=x.x.x.x
NETWORK=x.x.x.x
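
After editing these files the bond can be activated with the stock RHEL init
scripts; a quick sanity check (run as root):

# service network restart
# ifconfig bond0

bond0 should come up with the address from ifcfg-bond0, and eth0 and eth1
should show the SLAVE flag in their ifconfig output.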

This solution is completely transparent to VCS (you can even run a low-priority
LLT heartbeat link over this bond device!) and it works like a charm for us
(don't forget to configure the switch for teaming).
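
For anyone who wants to try the low-priority link: a minimal /etc/llttab
sketch (the node name and cluster ID below are placeholders; eth2 and eth3
are the regular heartbeat NICs from above):

set-node node1
set-cluster 100
link eth2 eth2 - ether - -
link eth3 eth3 - ether - -
link-lowpri bond0 bond0 - ether - -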


Hope this helps a little bit,

Hubert

-----Original Message-----
From: veritas-ha-admin@mailman.eng.auburn.edu
[mailto:veritas-ha-admin@mailman.eng.auburn.edu] On Behalf Of Jose Goyana
Sent: Thursday, May 19, 2005 3:39 PM
To: Luis Payano; 'Jim Senicka'; 'Dietsch, Nathan';
veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] Basic NIC failover question

Hi!

    I forgot to mention that I'm using VCS 4.0 MP1 for Red Hat Linux. It has
no MultiNICB agent, only MultiNICA...
    Red Hat has a NIC-failover feature similar to Solaris' IPMP, called
"bonding", but it requires the OS to bring up all interfaces belonging to the
bond at boot time, before VCS tries to use them. When I tried to associate an
IP agent with the bond interface, the resource faulted when I brought it
online...
    The other problem is that I must have two service groups in the same
cluster (both able to run on either of the two servers, even both on the same
server), but these groups must share the pair of public NICs. The VCS Bundled
Agents Reference Guide recommends setting up a MultiNICA resource in one group
and a Proxy resource in the other, to avoid redundant monitoring of the same
resources. My question is: since I apparently must name the remote machine in
the Proxy agent's setup, what would be the best approach? Should I ignore the
manual and create another MultiNICA pointing to the same physical resources?
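
If I read the guide right, the Proxy setup would look something like this
(the resource names are made up), with the second group getting a Proxy
resource that simply mirrors the state of the MultiNICA resource in the
first group:

Proxy pub_nic_proxy (
                TargetResName = pub_multinica
                )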

    Many thanks!

Jose

PS: Two MultiNICA resources pointing to the same NICs seem to be working fine
in the tests I've made so far!

----- Original Message ----- 
From: "Luis Payano" <payanol@corp.earthlink.net>
To: "'Jim Senicka'" <jsenicka@veritas.com>; "'Dietsch, Nathan'"
<Nathan.Dietsch@team.telstra.com>; "'Jose Goyana'"
<jose.goyana@cobra.com.br>; <veritas-ha@mailman.eng.auburn.edu>
Sent: Thursday, May 19, 2005 10:23 AM
Subject: RE: [Veritas-ha] Basic NIC failover question


> 
> 
> Agreed.  IPMultiNICB is faster and supports additional features, but it
> does not support plumbing interfaces or active/passive configuration of
> physical interfaces.
> 
> See pages 55-56:
> http://ftp.support.veritas.com/pub/support/products/ClusterServer_UNIX/266736.pdf
> 
> 
> 
> Luis Payano
> 
> 

_______________________________________________
Veritas-ha maillist  -  Veritas-ha@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-ha

