
List:       redhat-linux-cluster
Subject:    Re: [Linux-cluster] Linux-cluster Digest, Vol 73, Issue 15
From:       parshuram prasad <parshu001 () gmail ! com>
Date:       2010-05-18 5:40:56
Message-ID: AANLkTilOp8-ZHTefEZn4CCqZIAEqtLyQRYixBP-4vfDm () mail ! gmail ! com

Please send me a cluster script. I want to create a two-node cluster on
Linux 5.3.

Thanks,
Parshuram
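
(For context: a minimal two-node cluster.conf along the lines below is the
usual starting point. The node names, addresses, fence device, password and
script path are placeholders, not a tested configuration; two_node="1" with
expected_votes="1" is the standard cman setting for exactly two nodes.)

<?xml version="1.0"?>
<cluster config_version="1" name="twonode">
    <!-- two_node mode lets the cluster stay quorate with a single member -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
        <clusternode name="node1.example.com" nodeid="1" votes="1">
            <fence>
                <method name="1">
                    <device name="pdu" port="1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="node2.example.com" nodeid="2" votes="1">
            <fence>
                <method name="1">
                    <device name="pdu" port="2"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <!-- placeholder fence device; substitute the hardware actually present -->
        <fencedevice agent="fence_apc" ipaddr="10.0.0.10" login="apc" name="pdu" passwd="changeme"/>
    </fencedevices>
    <rm>
        <resources>
            <ip address="10.0.0.100" monitor_link="1"/>
            <script file="/usr/local/bin/myservice" name="myservice"/>
        </resources>
        <service autostart="1" name="svc1" recovery="relocate">
            <ip ref="10.0.0.100"/>
            <script ref="myservice"/>
        </service>
    </rm>
</cluster>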


On Sat, May 15, 2010 at 6:57 PM, <linux-cluster-request@redhat.com> wrote:

> Send Linux-cluster mailing list submissions to
>        linux-cluster@redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        https://www.redhat.com/mailman/listinfo/linux-cluster
> or, via email, send a message with subject or body 'help' to
>        linux-cluster-request@redhat.com
>
> You can reach the person managing the list at
>        linux-cluster-owner@redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-cluster digest..."
>
>
> Today's Topics:
>
>   1. GFS on Debian Lenny (Brent Clark)
>   2. pull plug on node, service never relocates (Dusty)
>   3. Re: GFS on Debian Lenny (Joao Ferreira gmail)
>   4. Re: pull plug on node, service never relocates (Corey Kovacs)
>   5. Re: pull plug on node, service never relocates (Kit Gerrits)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 14 May 2010 20:26:46 +0200
> From: Brent Clark <brentgclarklist@gmail.com>
> To: linux clustering <linux-cluster@redhat.com>
> Subject: [Linux-cluster] GFS on Debian Lenny
> Message-ID: <4BED95E6.4040006@gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hiya
>
> I'm trying to get GFS working on Debian Lenny. Unfortunately,
> documentation seems to be non-existent, and the one site that Google
> recommends, gcharriere.com, is down.
>
> I used Google's cache to try to make heads or tails of what needs to be
> done, but unfortunately I've been unsuccessful.
>
> Would anyone have any documentation or sites, or, if you have a heart,
> provide a howto to get GFS working?
>
> From my side, all I've done is:
>
> aptitude install gfs2-tools
> modprobe gfs2
> gfs_mkfs -p lock_dlm -t lolcats:drbdtest /dev/drbd0 -j 2
>
> That's all I've done; no editing of configs, etc.
>
> When I try,
>
> mount -t gfs2 /dev/drbd0 /drbd/
>
> I get the following message:
>
> /sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs2: gfs_controld not running
> /sbin/mount.gfs2: error mounting lockproto lock_dlm
>
> If anyone can help, it would be appreciated.
>
> Kind Regards
> Brent Clark
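>
> (The error means the cluster manager stack, which provides gfs_controld, is
> not running yet. A rough sketch of the missing pieces on Lenny, assuming the
> cman package and reusing the cluster name "lolcats" that the gfs_mkfs -t
> argument already declares; package and init-script names should be checked
> against what Lenny actually ships:
>
> aptitude install cman            # cman_tool, fenced, dlm_controld, gfs_controld
>
> # /etc/cluster/cluster.conf - the name must match the "lolcats" lock table
> <?xml version="1.0"?>
> <cluster config_version="1" name="lolcats">
>     <cman two_node="1" expected_votes="1"/>
>     <clusternodes>
>         <clusternode name="lenny1" nodeid="1" votes="1"/>
>         <clusternode name="lenny2" nodeid="2" votes="1"/>
>     </clusternodes>
>     <!-- fence devices omitted for brevity; a real cluster needs them -->
> </cluster>
>
> /etc/init.d/cman start           # membership, fencing and lock-manager daemons
> mount -t gfs2 /dev/drbd0 /drbd/
>
> The two placeholder node names assume the two-node setup implied by the
> "-j 2" journal count.)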
>
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 14 May 2010 14:45:11 -0500
> From: Dusty <dhoffutt@gmail.com>
> To: Linux-cluster@redhat.com
> Subject: [Linux-cluster] pull plug on node, service never relocates
> Message-ID:
>        <AANLkTil1ssNgEYRs71I_xmsLV3enagF76kEQYAt-Tdse@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Greetings,
>
> Using stock "clustering" and "cluster-storage" from RHEL5 update 4 X86_64
> ISO.
>
> As an example, using my config below:
>
> Node1 is running service1, node2 is running service2, etc, etc, node5 is
> spare and available for the relocation of any failover domain / cluster
> service.
>
> If I go into the APC PDU and turn off the electrical port to node1, node2
> will fence node1 (logging into the APC PDU and doing an off/on on node1's
> port). This is fine and works well. When node1 comes back up, it shuts
> down service1 and service1 relocates to node5.
>
> Now if I go into the lab and literally pull the plug on node5 running
> service1, another node fences node5 via the APC - I can check the APC PDU log
> and see that it has done an off/on on node5's electrical port just fine.
>
> But I pulled the plug on node5 - resetting the power doesn't matter. I want
> to simulate a completely dead node, and have the service relocate in this
> case of complete node failure.
>
> In this RHEL5.4 cluster, the service never relocates. I can simulate this on
> any node for any service. What if a node's motherboard fries?
>
> What can I set to have the remaining nodes stop waiting for the reboot of a
> failed node and just go ahead and relocate the cluster service that had been
> running on the now failed node?
>
> Thank you!
>
> versions:
>
> cman-2.0.115-1.el5
> openais-0.80.6-8.el5
> modcluster-0.12.1-2.el5
> lvm2-cluster-2.02.46-8.el5
> rgmanager-2.0.52-1.el5
> ricci-0.12.2-6.el5
>
> cluster.conf (sanitized, real scripts removed, all gfs2 mounts gone for
> clarity):
> <?xml version="1.0"?>
> <cluster config_version="1"
> name="alderaanDefenseShieldRebelAllianceCluster">
>    <fence_daemon clean_start="0" post_fail_delay="3" post_join_delay="60"/>
>    <clusternodes>
>        <clusternode name="192.168.1.1" nodeid="1" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="1" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.2" nodeid="2" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="2" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.3" nodeid="3" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="3" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.4" nodeid="4" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="4" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.5" nodeid="5" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="5" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>    </clusternodes>
>    <cman expected_votes="6"/>
>    <fencedevices>
>        <fencedevice agent="fence_apc" ipaddr="192.168.1.20" login="device"
> name="apc_pdu" passwd="wonderwomanWasAPrettyCoolSuperhero"/>
>    </fencedevices>
>    <rm>
>        <failoverdomains>
>            <failoverdomain name="fd1" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="1"/>
>                <failoverdomainnode name="192.168.1.2" priority="2"/>
>                <failoverdomainnode name="192.168.1.3" priority="3"/>
>                <failoverdomainnode name="192.168.1.4" priority="4"/>
>                <failoverdomainnode name="192.168.1.5" priority="5"/>
>            </failoverdomain>
>            <failoverdomain name="fd2" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="5"/>
>                <failoverdomainnode name="192.168.1.2" priority="1"/>
>                <failoverdomainnode name="192.168.1.3" priority="2"/>
>                <failoverdomainnode name="192.168.1.4" priority="3"/>
>                <failoverdomainnode name="192.168.1.5" priority="4"/>
>            </failoverdomain>
>            <failoverdomain name="fd3" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="4"/>
>                <failoverdomainnode name="192.168.1.2" priority="5"/>
>                <failoverdomainnode name="192.168.1.3" priority="1"/>
>                <failoverdomainnode name="192.168.1.4" priority="2"/>
>                <failoverdomainnode name="192.168.1.5" priority="3"/>
>            </failoverdomain>
>            <failoverdomain name="fd4" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="3"/>
>                <failoverdomainnode name="192.168.1.2" priority="4"/>
>                <failoverdomainnode name="192.168.1.3" priority="5"/>
>                <failoverdomainnode name="192.168.1.4" priority="1"/>
>                <failoverdomainnode name="192.168.1.5" priority="2"/>
>            </failoverdomain>
>        </failoverdomains>
>        <resources>
>            <ip address="10.1.1.1" monitor_link="1"/>
>            <ip address="10.1.1.2" monitor_link="1"/>
>            <ip address="10.1.1.3" monitor_link="1"/>
>            <ip address="10.1.1.4" monitor_link="1"/>
>            <ip address="10.1.1.5" monitor_link="1"/>
>            <script file="/usr/local/bin/service1" name="service1"/>
>            <script file="/usr/local/bin/service2" name="service2"/>
>            <script file="/usr/local/bin/service3" name="service3"/>
>            <script file="/usr/local/bin/service4" name="service4"/>
>       </resources>
>        <service autostart="1" domain="fd1" exclusive="1" name="service1"
> recovery="relocate">
>            <ip ref="10.1.1.1"/>
>            <script ref="service1"/>
>        </service>
>        <service autostart="1" domain="fd2" exclusive="1" name="service2"
> recovery="relocate">
>            <ip ref="10.1.1.2"/>
>            <script ref="service2"/>
>        </service>
>        <service autostart="1" domain="fd3" exclusive="1" name="service3"
> recovery="relocate">
>            <ip ref="10.1.1.3"/>
>            <script ref="service3"/>
>        </service>
>        <service autostart="1" domain="fd4" exclusive="1" name="service4"
> recovery="relocate">
>            <ip ref="10.1.1.4"/>
>            <script ref="service4"/>
>        </service>
>    </rm>
> </cluster>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://www.redhat.com/archives/linux-cluster/attachments/20100514/c892bf86/attachment.html
> >
>
> ------------------------------
>
> Message: 3
> Date: Fri, 14 May 2010 23:31:41 +0100
> From: Joao Ferreira gmail <joao.miguel.c.ferreira@gmail.com>
> To: linux clustering <linux-cluster@redhat.com>
> Subject: Re: [Linux-cluster] GFS on Debian Lenny
> Message-ID: <1273876301.5298.1.camel@debj5n.critical.pt>
> Content-Type: text/plain
>
> Have you checked the docs at the DRBD site?
>
> It contains some short info regarding the use of GFS over DRBD:
>
> http://www.drbd.org/docs/applications/
>
> cheers
> Joao
>
> On Fri, 2010-05-14 at 20:26 +0200, Brent Clark wrote:
> > Hiya
> >
> > I'm trying to get GFS working on Debian Lenny. Unfortunately,
> > documentation seems to be non-existent, and the one site that Google
> > recommends, gcharriere.com, is down.
> >
> > I used Google's cache to try to make heads or tails of what needs to be
> > done, but unfortunately I've been unsuccessful.
> >
> > Would anyone have any documentation or sites, or, if you have a heart,
> > provide a howto to get GFS working?
> >
> > From my side, all I've done is:
> >
> > aptitude install gfs2-tools
> > modprobe gfs2
> > gfs_mkfs -p lock_dlm -t lolcats:drbdtest /dev/drbd0 -j 2
> >
> > That's all I've done; no editing of configs, etc.
> >
> > When I try,
> >
> > mount -t gfs2 /dev/drbd0 /drbd/
> >
> > I get the following message:
> >
> > /sbin/mount.gfs2: can't connect to gfs_controld: Connection refused
> > /sbin/mount.gfs2: gfs_controld not running
> > /sbin/mount.gfs2: error mounting lockproto lock_dlm
> >
> > If anyone can help, it would be appreciated.
> >
> > Kind Regards
> > Brent Clark
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster@redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
> ------------------------------
>
> Message: 4
> Date: Sat, 15 May 2010 04:59:23 +0100
> From: Corey Kovacs <corey.kovacs@gmail.com>
> To: linux clustering <linux-cluster@redhat.com>
> Subject: Re: [Linux-cluster] pull plug on node, service never
>        relocates
> Message-ID:
>        <AANLkTinYVvrit1oPb76TfLa9vmp1AMHcGI3eoZALHxrJ@mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> What happens when you do ...
>
> fence_node 192.168.1.4
>
> from any of the other nodes?
>
> If that doesn't work, then fencing is not configured correctly and you
> should try to invoke the fence agent directly.
> Also, it would help if you included the APC model and firmware rev.
> The fence_apc agent can be finicky about such things.
>
>
> Hope this helps.
>
> -Core
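>
> (For example, to drive the agent by hand with the values from the posted
> cluster.conf; flag spellings differ between fence_apc releases, so treat
> this as a sketch and confirm against the fence_apc man page:
>
> fence_apc -a 192.168.1.20 -l device -p wonderwomanWasAPrettyCoolSuperhero -n 4 -o reboot
>
> The switch="1" value from the config may also need to be supplied; the man
> page lists the option. If the outlet cycles, the agent and PDU are fine and
> the problem is further up the stack; if not, start with the fencing config.)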
>
> On Fri, May 14, 2010 at 8:45 PM, Dusty <dhoffutt@gmail.com> wrote:
> > Greetings,
> >
> > Using stock "clustering" and "cluster-storage" from RHEL5 update 4 X86_64
> > ISO.
> >
> > As an example, using my config below:
> >
> > Node1 is running service1, node2 is running service2, etc, etc, node5 is
> > spare and available for the relocation of any failover domain / cluster
> > service.
> >
> > If I go into the APC PDU and turn off the electrical port to node1, node2
> > will fence node1 (logging into the APC PDU and doing an off/on on node1's
> > port). This is fine and works well. When node1 comes back up, it shuts
> > down service1 and service1 relocates to node5.
> >
> > Now if I go into the lab and literally pull the plug on node5 running
> > service1, another node fences node5 via the APC - I can check the APC PDU log
> > and see that it has done an off/on on node5's electrical port just fine.
> >
> > But I pulled the plug on node5 - resetting the power doesn't matter. I want
> > to simulate a completely dead node, and have the service relocate in this
> > case of complete node failure.
> >
> > In this RHEL5.4 cluster, the service never relocates. I can simulate this on
> > any node for any service. What if a node's motherboard fries?
> >
> > What can I set to have the remaining nodes stop waiting for the reboot of a
> > failed node and just go ahead and relocate the cluster service that had been
> > running on the now failed node?
> >
> > Thank you!
> >
> > versions:
> >
> > cman-2.0.115-1.el5
> > openais-0.80.6-8.el5
> > modcluster-0.12.1-2.el5
> > lvm2-cluster-2.02.46-8.el5
> > rgmanager-2.0.52-1.el5
> > ricci-0.12.2-6.el5
> >
> > cluster.conf (sanitized, real scripts removed, all gfs2 mounts gone for
> > clarity):
> > <?xml version="1.0"?>
> > <cluster config_version="1"
> > name="alderaanDefenseShieldRebelAllianceCluster">
> >    <fence_daemon clean_start="0" post_fail_delay="3" post_join_delay="60"/>
> >    <clusternodes>
> >        <clusternode name="192.168.1.1" nodeid="1" votes="1">
> >            <fence>
> >                <method name="1">
> >                    <device name="apc_pdu" port="1" switch="1"/>
> >                </method>
> >            </fence>
> >        </clusternode>
> >        <clusternode name="192.168.1.2" nodeid="2" votes="1">
> >            <fence>
> >                <method name="1">
> >                    <device name="apc_pdu" port="2" switch="1"/>
> >                </method>
> >            </fence>
> >        </clusternode>
> >        <clusternode name="192.168.1.3" nodeid="3" votes="1">
> >            <fence>
> >                <method name="1">
> >                    <device name="apc_pdu" port="3" switch="1"/>
> >                </method>
> >            </fence>
> >        </clusternode>
> >        <clusternode name="192.168.1.4" nodeid="4" votes="1">
> >            <fence>
> >                <method name="1">
> >                    <device name="apc_pdu" port="4" switch="1"/>
> >                </method>
> >            </fence>
> >        </clusternode>
> >        <clusternode name="192.168.1.5" nodeid="5" votes="1">
> >            <fence>
> >                <method name="1">
> >                    <device name="apc_pdu" port="5" switch="1"/>
> >                </method>
> >            </fence>
> >        </clusternode>
> >    </clusternodes>
> >    <cman expected_votes="6"/>
> >    <fencedevices>
> >        <fencedevice agent="fence_apc" ipaddr="192.168.1.20" login="device"
> > name="apc_pdu" passwd="wonderwomanWasAPrettyCoolSuperhero"/>
> >    </fencedevices>
> >    <rm>
> >        <failoverdomains>
> >            <failoverdomain name="fd1" nofailback="0" ordered="1"
> > restricted="1">
> >                <failoverdomainnode name="192.168.1.1" priority="1"/>
> >                <failoverdomainnode name="192.168.1.2" priority="2"/>
> >                <failoverdomainnode name="192.168.1.3" priority="3"/>
> >                <failoverdomainnode name="192.168.1.4" priority="4"/>
> >                <failoverdomainnode name="192.168.1.5" priority="5"/>
> >            </failoverdomain>
> >            <failoverdomain name="fd2" nofailback="0" ordered="1"
> > restricted="1">
> >                <failoverdomainnode name="192.168.1.1" priority="5"/>
> >                <failoverdomainnode name="192.168.1.2" priority="1"/>
> >                <failoverdomainnode name="192.168.1.3" priority="2"/>
> >                <failoverdomainnode name="192.168.1.4" priority="3"/>
> >                <failoverdomainnode name="192.168.1.5" priority="4"/>
> >            </failoverdomain>
> >            <failoverdomain name="fd3" nofailback="0" ordered="1"
> > restricted="1">
> >                <failoverdomainnode name="192.168.1.1" priority="4"/>
> >                <failoverdomainnode name="192.168.1.2" priority="5"/>
> >                <failoverdomainnode name="192.168.1.3" priority="1"/>
> >                <failoverdomainnode name="192.168.1.4" priority="2"/>
> >                <failoverdomainnode name="192.168.1.5" priority="3"/>
> >            </failoverdomain>
> >            <failoverdomain name="fd4" nofailback="0" ordered="1"
> > restricted="1">
> >                <failoverdomainnode name="192.168.1.1" priority="3"/>
> >                <failoverdomainnode name="192.168.1.2" priority="4"/>
> >                <failoverdomainnode name="192.168.1.3" priority="5"/>
> >                <failoverdomainnode name="192.168.1.4" priority="1"/>
> >                <failoverdomainnode name="192.168.1.5" priority="2"/>
> >            </failoverdomain>
> >        </failoverdomains>
> >        <resources>
> >            <ip address="10.1.1.1" monitor_link="1"/>
> >            <ip address="10.1.1.2" monitor_link="1"/>
> >            <ip address="10.1.1.3" monitor_link="1"/>
> >            <ip address="10.1.1.4" monitor_link="1"/>
> >            <ip address="10.1.1.5" monitor_link="1"/>
> >            <script file="/usr/local/bin/service1" name="service1"/>
> >            <script file="/usr/local/bin/service2" name="service2"/>
> >            <script file="/usr/local/bin/service3" name="service3"/>
> >            <script file="/usr/local/bin/service4" name="service4"/>
> >        </resources>
> >        <service autostart="1" domain="fd1" exclusive="1" name="service1"
> > recovery="relocate">
> >            <ip ref="10.1.1.1"/>
> >            <script ref="service1"/>
> >        </service>
> >        <service autostart="1" domain="fd2" exclusive="1" name="service2"
> > recovery="relocate">
> >            <ip ref="10.1.1.2"/>
> >            <script ref="service2"/>
> >        </service>
> >        <service autostart="1" domain="fd3" exclusive="1" name="service3"
> > recovery="relocate">
> >            <ip ref="10.1.1.3"/>
> >            <script ref="service3"/>
> >        </service>
> >        <service autostart="1" domain="fd4" exclusive="1" name="service4"
> > recovery="relocate">
> >            <ip ref="10.1.1.4"/>
> >            <script ref="service4"/>
> >        </service>
> >    </rm>
> > </cluster>
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster@redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
>
>
>
> ------------------------------
>
> Message: 5
> Date: Sat, 15 May 2010 15:26:49 +0200
> From: "Kit Gerrits" <kitgerrits@gmail.com>
> To: "'linux clustering'" <linux-cluster@redhat.com>
> Subject: Re: [Linux-cluster] pull plug on node, service never
>        relocates
> Message-ID: <4beea118.1067f10a.4a1f.ffff8975@mx.google.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
> Hello,
>
> You might want to check the syslog to see if the cluster has noticed the
> outage and what it has tried to do about it.
> You can also check the node status via 'cman_tool nodes' (explanation of
> states in the cman_tool manpage).
> Does the server have another power source, by any chance?
>  (If not, make sure you DO have dual power supplies. These things die often.)
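>
> (Concretely, on one of the surviving nodes the stock RHEL5 tools for this
> are:
>
>   cman_tool status     # quorum state, expected vs. total votes
>   cman_tool nodes      # per-node membership state
>   clustat              # rgmanager's view of services and their owners
>   grep -E 'fenced|rgmanager|openais' /var/log/messages
> )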
>
>
> Regards,
>
> Kit
>
>  _____
>
> From: linux-cluster-bounces@redhat.com
> [mailto:linux-cluster-bounces@redhat.com] On Behalf Of Dusty
> Sent: vrijdag 14 mei 2010 21:45
> To: Linux-cluster@redhat.com
> Subject: [Linux-cluster] pull plug on node, service never relocates
>
>
> Greetings,
>
> Using stock "clustering" and "cluster-storage" from RHEL5 update 4 X86_64
> ISO.
>
> As an example, using my config below:
>
> Node1 is running service1, node2 is running service2, etc, etc, node5 is
> spare and available for the relocation of any failover domain / cluster
> service.
>
> If I go into the APC PDU and turn off the electrical port to node1, node2
> will fence node1 (logging into the APC PDU and doing an off/on on node1's
> port). This is fine and works well. When node1 comes back up, it shuts
> down service1 and service1 relocates to node5.
>
> Now if I go into the lab and literally pull the plug on node5 running
> service1, another node fences node5 via the APC - I can check the APC PDU log
> and see that it has done an off/on on node5's electrical port just fine.
>
> But I pulled the plug on node5 - resetting the power doesn't matter. I want
> to simulate a completely dead node, and have the service relocate in this
> case of complete node failure.
>
> In this RHEL5.4 cluster, the service never relocates. I can simulate this on
> any node for any service. What if a node's motherboard fries?
>
> What can I set to have the remaining nodes stop waiting for the reboot of a
> failed node and just go ahead and relocate the cluster service that had been
> running on the now failed node?
>
> Thank you!
>
> versions:
>
> cman-2.0.115-1.el5
> openais-0.80.6-8.el5
> modcluster-0.12.1-2.el5
> lvm2-cluster-2.02.46-8.el5
> rgmanager-2.0.52-1.el5
> ricci-0.12.2-6.el5
>
> cluster.conf (sanitized, real scripts removed, all gfs2 mounts gone for
> clarity):
> <?xml version="1.0"?>
> <cluster config_version="1"
> name="alderaanDefenseShieldRebelAllianceCluster">
>    <fence_daemon clean_start="0" post_fail_delay="3" post_join_delay="60"/>
>    <clusternodes>
>        <clusternode name="192.168.1.1" nodeid="1" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="1" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.2" nodeid="2" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="2" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.3" nodeid="3" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="3" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.4" nodeid="4" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="4" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>        <clusternode name="192.168.1.5" nodeid="5" votes="1">
>            <fence>
>                <method name="1">
>                    <device name="apc_pdu" port="5" switch="1"/>
>                </method>
>            </fence>
>        </clusternode>
>    </clusternodes>
>    <cman expected_votes="6"/>
>    <fencedevices>
>        <fencedevice agent="fence_apc" ipaddr="192.168.1.20" login="device"
> name="apc_pdu" passwd="wonderwomanWasAPrettyCoolSuperhero"/>
>    </fencedevices>
>    <rm>
>        <failoverdomains>
>            <failoverdomain name="fd1" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="1"/>
>                <failoverdomainnode name="192.168.1.2" priority="2"/>
>                <failoverdomainnode name="192.168.1.3" priority="3"/>
>                <failoverdomainnode name="192.168.1.4" priority="4"/>
>                <failoverdomainnode name="192.168.1.5" priority="5"/>
>            </failoverdomain>
>            <failoverdomain name="fd2" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="5"/>
>                <failoverdomainnode name="192.168.1.2" priority="1"/>
>                <failoverdomainnode name="192.168.1.3" priority="2"/>
>                <failoverdomainnode name="192.168.1.4" priority="3"/>
>                <failoverdomainnode name="192.168.1.5" priority="4"/>
>            </failoverdomain>
>            <failoverdomain name="fd3" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="4"/>
>                <failoverdomainnode name="192.168.1.2" priority="5"/>
>                <failoverdomainnode name="192.168.1.3" priority="1"/>
>                <failoverdomainnode name="192.168.1.4" priority="2"/>
>                <failoverdomainnode name="192.168.1.5" priority="3"/>
>            </failoverdomain>
>            <failoverdomain name="fd4" nofailback="0" ordered="1"
> restricted="1">
>                <failoverdomainnode name="192.168.1.1" priority="3"/>
>                <failoverdomainnode name="192.168.1.2" priority="4"/>
>                <failoverdomainnode name="192.168.1.3" priority="5"/>
>                <failoverdomainnode name="192.168.1.4" priority="1"/>
>                <failoverdomainnode name="192.168.1.5" priority="2"/>
>            </failoverdomain>
>        </failoverdomains>
>        <resources>
>            <ip address="10.1.1.1" monitor_link="1"/>
>            <ip address="10.1.1.2" monitor_link="1"/>
>            <ip address="10.1.1.3" monitor_link="1"/>
>            <ip address="10.1.1.4" monitor_link="1"/>
>            <ip address="10.1.1.5" monitor_link="1"/>
>            <script file="/usr/local/bin/service1" name="service1"/>
>            <script file="/usr/local/bin/service2" name="service2"/>
>            <script file="/usr/local/bin/service3" name="service3"/>
>            <script file="/usr/local/bin/service4" name="service4"/>
>       </resources>
>        <service autostart="1" domain="fd1" exclusive="1" name="service1"
> recovery="relocate">
>            <ip ref="10.1.1.1"/>
>            <script ref="service1"/>
>        </service>
>        <service autostart="1" domain="fd2" exclusive="1" name="service2"
> recovery="relocate">
>            <ip ref="10.1.1.2"/>
>            <script ref="service2"/>
>        </service>
>        <service autostart="1" domain="fd3" exclusive="1" name="service3"
> recovery="relocate">
>            <ip ref="10.1.1.3"/>
>            <script ref="service3"/>
>        </service>
>        <service autostart="1" domain="fd4" exclusive="1" name="service4"
> recovery="relocate">
>            <ip ref="10.1.1.4"/>
>            <script ref="service4"/>
>        </service>
>    </rm>
> </cluster>
>
>
>
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://www.redhat.com/archives/linux-cluster/attachments/20100515/4bc55bbe/attachment.html
> >
>
> ------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> End of Linux-cluster Digest, Vol 73, Issue 15
> *********************************************
>



-- 
Warm Regards
Parshuram Prasad
+91-9560170372
Sr. System Administrator & Database Administrator

Stratoshear Technology Pvt. Ltd.

BPS House Green Park -16
www.stratoshear.com

[Attachment #5 (text/html)]

please send me cluster script . i want to create two node clustering on linux \
5.3<br><br>thx <br>parshuram<br><br><br><div class="gmail_quote">On Sat, May 15, 2010 \
at 6:57 PM,  <span dir="ltr">&lt;<a \
href="mailto:linux-cluster-request@redhat.com">linux-cluster-request@redhat.com</a>&gt;</span> \
wrote:<br> <blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, \
204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Send Linux-cluster mailing \
                list submissions to<br>
        <a href="mailto:linux-cluster@redhat.com">linux-cluster@redhat.com</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
        <a href="https://www.redhat.com/mailman/listinfo/linux-cluster" \
target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br> or, via \
                email, send a message with subject or body &#39;help&#39; to<br>
        <a href="mailto:linux-cluster-request@redhat.com">linux-cluster-request@redhat.com</a><br>
 <br>
You can reach the person managing the list at<br>
        <a href="mailto:linux-cluster-owner@redhat.com">linux-cluster-owner@redhat.com</a><br>
 <br>
When replying, please edit your Subject line so it is more specific<br>
than &quot;Re: Contents of Linux-cluster digest...&quot;<br>
<br>
<br>
Today&#39;s Topics:<br>
<br>
   1. GFS on Debian Lenny (Brent Clark)<br>
   2. pull plug on node, service never relocates (Dusty)<br>
   3. Re: GFS on Debian Lenny (Joao Ferreira gmail)<br>
   4. Re: pull plug on node, service never relocates (Corey Kovacs)<br>
   5. Re: pull plug on node, service never relocates (Kit Gerrits)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Fri, 14 May 2010 20:26:46 +0200<br>
From: Brent Clark &lt;<a \
                href="mailto:brentgclarklist@gmail.com">brentgclarklist@gmail.com</a>&gt;<br>
                
To: linux clustering &lt;<a \
                href="mailto:linux-cluster@redhat.com">linux-cluster@redhat.com</a>&gt;<br>
                
Subject: [Linux-cluster] GFS on Debian Lenny<br>
Message-ID: &lt;<a href="mailto:4BED95E6.4040006@gmail.com">4BED95E6.4040006@gmail.com</a>&gt;<br>
                
Content-Type: text/plain; charset=ISO-8859-1; format=flowed<br>
<br>
Hiya<br>
<br>
Im trying to get GFS working on Debian Lenny. Unfortuantely<br>
documentation seems to be non existent. And the one site that google<br>
recommends, <a href="http://gcharriere.com" target="_blank">gcharriere.com</a>, is \
down.<br> <br>
I used googles caching mechanism to try and make head and tails of whats<br>
needed to be done, but unfortunately Im unsuccessful.<br>
<br>
Would anyone have any documentation or any sites or if you have a heart,<br>
provide a howto to get GFS working.<br>
<br>
 From myside, all ive done is:<br>
<br>
aptitude install gfs2-tools<br>
modprobe gfs2<br>
gfs_mkfs -p lock_dlm -t lolcats:drbdtest /dev/drbd0 -j 2<br>
<br>
thats all, ive done. No editting of configs etc.<br>
<br>
When I try,<br>
<br>
mount -t gfs2 /dev/drbd0 /drbd/<br>
<br>
I get the following message:<br>
<br>
/sbin/mount.gfs2: can&#39;t connect to gfs_controld: Connection refused<br>
/sbin/mount.gfs2: gfs_controld not running<br>
/sbin/mount.gfs2: error mounting lockproto lock_dlm<br>
<br>
If anyone can help, it would be appreciated.<br>
<br>
Kind Regards<br>
Brent Clark<br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Fri, 14 May 2010 14:45:11 -0500<br>
From: Dusty &lt;<a href="mailto:dhoffutt@gmail.com">dhoffutt@gmail.com</a>&gt;<br>
To: <a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br>
Subject: [Linux-cluster] pull plug on node, service never relocates<br>
Message-ID:<br>
        &lt;<a href="mailto:AANLkTil1ssNgEYRs71I_xmsLV3enagF76kEQYAt-Tdse@mail.gmail.com">AANLkTil1ssNgEYRs71I_xmsLV3enagF76kEQYAt-Tdse@mail.gmail.com</a>&gt;<br>
                
Content-Type: text/plain; charset=&quot;iso-8859-1&quot;<br>
<br>
Greetings,<br>
<br>
Using stock &quot;clustering&quot; and &quot;cluster-storage&quot; from RHEL5 update \
4 X86_64<br> ISO.<br>
<br>
As an example using my below config:<br>
<br>
Node1 is running service1, node2 is running service2, etc, etc, node5 is<br>
spare and available for the relocation of any failover domain / cluster<br>
service.<br>
<br>
If I go into the APC PDU and turn off the electrical port to node1, node2<br>
will fence node1 (going into the APC PDU and doing and off, on on node1&#39;s<br>
port), this is fine. Works well. When node1 comes back up, then it shuts<br>
down service1 and service1 relocates to node5.<br>
<br>
Now if I go in the lab and literally pull the plug on node5 running<br>
service1, another node fences node5 via the APC - can check the APC PDU log<br>
and see that it has done an off/on on node5&#39;s electrical port just fine.<br>
<br>
But I pulled the plug on node5 - resetting the power doesn&#39;t matter. I want<br>
to simulate a completely dead node, and have the service relocate in this<br>
case of complete node failure.<br>
<br>
In this RHEL5.4 cluster, the service never relocates. I can similate this on<br>
any node for any service. What if a node&#39;s motherboard fries?<br>
<br>
What can I set to have the remaining nodes stop waiting for the reboot of a<br>
failed node and just go ahead and relocate the cluster service that had been<br>
running on the now failed node?<br>
<br>
Thank you!<br>
<br>
versions:<br>
<br>
cman-2.0.115-1.el5<br>
openais-0.80.6-8.el5<br>
modcluster-0.12.1-2.el5<br>
lvm2-cluster-2.02.46-8.el5<br>
rgmanager-2.0.52-1.el5<br>
ricci-0.12.2-6.el5<br>
<br>
cluster.conf (sanitized, real scripts removed, all gfs2 mounts gone for<br>
clarity):<br>
&lt;?xml version=&quot;1.0&quot;?&gt;<br>
&lt;cluster config_version=&quot;1&quot;<br>
name=&quot;alderaanDefenseShieldRebelAllianceCluster&quot;&gt;<br>
    &lt;fence_daemon clean_start=&quot;0&quot; post_fail_delay=&quot;3&quot; \
post_join_delay=&quot;60&quot;/&gt;<br>  &lt;clusternodes&gt;<br>
        &lt;clusternode name=&quot;192.168.1.1&quot; nodeid=&quot;1&quot; \
votes=&quot;1&quot;&gt;<br>  &lt;fence&gt;<br>
                &lt;method name=&quot;1&quot;&gt;<br>
                    &lt;device name=&quot;apc_pdu&quot; port=&quot;1&quot; \
switch=&quot;1&quot;/&gt;<br>  &lt;/method&gt;<br>
            &lt;/fence&gt;<br>
        &lt;/clusternode&gt;<br>
        &lt;clusternode name=&quot;192.168.1.2&quot; nodeid=&quot;2&quot; \
votes=&quot;1&quot;&gt;<br>  &lt;fence&gt;<br>
                &lt;method name=&quot;1&quot;&gt;<br>
                    &lt;device name=&quot;apc_pdu&quot; port=&quot;2&quot; \
switch=&quot;1&quot;/&gt;<br>  &lt;/method&gt;<br>
            &lt;/fence&gt;<br>
        &lt;/clusternode&gt;<br>
        &lt;clusternode name=&quot;192.168.1.3&quot; nodeid=&quot;3&quot; \
votes=&quot;1&quot;&gt;<br>  &lt;fence&gt;<br>
                &lt;method name=&quot;1&quot;&gt;<br>
                    &lt;device name=&quot;apc_pdu&quot; port=&quot;3&quot; \
switch=&quot;1&quot;/&gt;<br>  &lt;/method&gt;<br>
            &lt;/fence&gt;<br>
        &lt;/clusternode&gt;<br>
        &lt;clusternode name=&quot;192.168.1.4&quot; nodeid=&quot;4&quot; \
votes=&quot;1&quot;&gt;<br>  &lt;fence&gt;<br>
                &lt;method name=&quot;1&quot;&gt;<br>
                    &lt;device name=&quot;apc_pdu&quot; port=&quot;4&quot; \
switch=&quot;1&quot;/&gt;<br>  &lt;/method&gt;<br>
            &lt;/fence&gt;<br>
        &lt;/clusternode&gt;<br>
        &lt;clusternode name=&quot;192.168.1.5&quot; nodeid=&quot;5&quot; \
votes=&quot;1&quot;&gt;<br>  &lt;fence&gt;<br>
                &lt;method name=&quot;1&quot;&gt;<br>
                    &lt;device name=&quot;apc_pdu&quot; port=&quot;5&quot; \
switch=&quot;1&quot;/&gt;<br>  &lt;/method&gt;<br>
            &lt;/fence&gt;<br>
        &lt;/clusternode&gt;<br>
    &lt;/clusternodes&gt;<br>
    &lt;cman expected_votes=&quot;6&quot;/&gt;<br>
    &lt;fencedevices&gt;<br>
        &lt;fencedevice agent=&quot;fence_apc&quot; ipaddr=&quot;192.168.1.20&quot; \
login=&quot;device&quot;<br> name=&quot;apc_pdu&quot; \
passwd=&quot;wonderwomanWasAPrettyCoolSuperhero&quot;/&gt;<br>  \
&lt;/fencedevices&gt;<br>  &lt;rm&gt;<br>
        &lt;failoverdomains&gt;<br>
            &lt;failoverdomain name=&quot;fd1&quot; nofailback=&quot;0&quot; \
ordered=&quot;1&quot;<br> restricted=&quot;1&quot;&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.1&quot; \
                priority=&quot;1&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.2&quot; \
                priority=&quot;2&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
                priority=&quot;3&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.4&quot; \
                priority=&quot;4&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.5&quot; \
priority=&quot;5&quot;/&gt;<br>  &lt;/failoverdomain&gt;<br>
            &lt;failoverdomain name=&quot;fd2&quot; nofailback=&quot;0&quot; \
ordered=&quot;1&quot;<br> restricted=&quot;1&quot;&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.1&quot; \
                priority=&quot;5&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.2&quot; \
                priority=&quot;1&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
                priority=&quot;2&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.4&quot; \
                priority=&quot;3&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.5&quot; \
priority=&quot;4&quot;/&gt;<br>  &lt;/failoverdomain&gt;<br>
            &lt;failoverdomain name=&quot;fd3&quot; nofailback=&quot;0&quot; \
ordered=&quot;1&quot;<br> restricted=&quot;1&quot;&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.1&quot; \
                priority=&quot;4&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.2&quot; \
                priority=&quot;5&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
                priority=&quot;1&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.4&quot; \
                priority=&quot;2&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.5&quot; \
priority=&quot;3&quot;/&gt;<br>  &lt;/failoverdomain&gt;<br>
            &lt;failoverdomain name=&quot;fd4&quot; nofailback=&quot;0&quot; \
ordered=&quot;1&quot;<br> restricted=&quot;1&quot;&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.1&quot; \
                priority=&quot;3&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.2&quot; \
                priority=&quot;4&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
                priority=&quot;5&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.4&quot; \
                priority=&quot;1&quot;/&gt;<br>
                &lt;failoverdomainnode name=&quot;192.168.1.5&quot; \
priority=&quot;2&quot;/&gt;<br>  &lt;/failoverdomain&gt;<br>
        &lt;/failoverdomains&gt;<br>
        &lt;resources&gt;<br>
            &lt;ip address=&quot;10.1.1.1&quot; monitor_link=&quot;1&quot;/&gt;<br>
            &lt;ip address=&quot;10.1.1.2&quot; monitor_link=&quot;1&quot;/&gt;<br>
            &lt;ip address=&quot;10.1.1.3&quot; monitor_link=&quot;1&quot;/&gt;<br>
            &lt;ip address=&quot;10.1.1.4&quot; monitor_link=&quot;1&quot;/&gt;<br>
            &lt;ip address=&quot;10.1.1.5&quot; monitor_link=&quot;1&quot;/&gt;<br>
            &lt;script file=&quot;/usr/local/bin/service1&quot; \
                name=&quot;service1&quot;/&gt;<br>
            &lt;script file=&quot;/usr/local/bin/service2&quot; \
                name=&quot;service2&quot;/&gt;<br>
            &lt;script file=&quot;/usr/local/bin/service3&quot; \
                name=&quot;service3&quot;/&gt;<br>
            &lt;script file=&quot;/usr/local/bin/service4&quot; \
name=&quot;service4&quot;/&gt;<br>  &lt;/resources&gt;<br>
        &lt;service autostart=&quot;1&quot; domain=&quot;fd1&quot; \
exclusive=&quot;1&quot; name=&quot;service1&quot;<br> \
recovery=&quot;relocate&quot;&gt;<br>  &lt;ip ref=&quot;10.1.1.1&quot;/&gt;<br>
            &lt;script ref=&quot;service1&quot;/&gt;<br>
        &lt;/service&gt;<br>
        &lt;service autostart=&quot;1&quot; domain=&quot;fd2&quot; \
exclusive=&quot;1&quot; name=&quot;service2&quot;<br> \
recovery=&quot;relocate&quot;&gt;<br>  &lt;ip ref=&quot;10.1.1.2&quot;/&gt;<br>
            &lt;script ref=&quot;service2&quot;/&gt;<br>
        &lt;/service&gt;<br>
        &lt;service autostart=&quot;1&quot; domain=&quot;fd3&quot; \
exclusive=&quot;1&quot; name=&quot;service3&quot;<br> \
recovery=&quot;relocate&quot;&gt;<br>  &lt;ip ref=&quot;10.1.1.3&quot;/&gt;<br>
            &lt;script ref=&quot;service3&quot;/&gt;<br>
        &lt;/service&gt;<br>
        &lt;service autostart=&quot;1&quot; domain=&quot;fd4&quot; \
exclusive=&quot;1&quot; name=&quot;service4&quot;<br> \
recovery=&quot;relocate&quot;&gt;<br>  &lt;ip ref=&quot;10.1.1.4&quot;/&gt;<br>
            &lt;script ref=&quot;service4&quot;/&gt;<br>
        &lt;/service&gt;<br>
    &lt;/rm&gt;<br>
&lt;/cluster&gt;<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: &lt;<a href="https://www.redhat.com/archives/linux-cluster/attachments/20100514/c892bf86/attachment.html" \
target="_blank">https://www.redhat.com/archives/linux-cluster/attachments/20100514/c892bf86/attachment.html</a>&gt;<br>


<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Fri, 14 May 2010 23:31:41 +0100<br>
From: Joao Ferreira gmail &lt;<a \
href="mailto:joao.miguel.c.ferreira@gmail.com">joao.miguel.c.ferreira@gmail.com</a>&gt;<br>
                
To: linux clustering &lt;<a \
                href="mailto:linux-cluster@redhat.com">linux-cluster@redhat.com</a>&gt;<br>
                
Subject: Re: [Linux-cluster] GFS on Debian Lenny<br>
Message-ID: &lt;<a href="mailto:1273876301.5298.1.camel@debj5n.critical.pt">1273876301.5298.1.camel@debj5n.critical.pt</a>&gt;<br>
                
Content-Type: text/plain<br>
<br>
Have you checked the docs at the drbd site ?<br>
<br>
it contains some short info regarding usage of gsf over drbd..<br>
<br>
<a href="http://www.drbd.org/docs/applications/" \
target="_blank">http://www.drbd.org/docs/applications/</a><br> <br>
cheers<br>
Joao<br>
<br>
On Fri, 2010-05-14 at 20:26 +0200, Brent Clark wrote:<br>
&gt; Hiya<br>
&gt;<br>
&gt; Im trying to get GFS working on Debian Lenny. Unfortuantely<br>
&gt; documentation seems to be non existent. And the one site that google<br>
&gt; recommends, <a href="http://gcharriere.com" target="_blank">gcharriere.com</a>, \
is down.<br> &gt;<br>
&gt; I used googles caching mechanism to try and make head and tails of whats<br>
&gt; needed to be done, but unfortunately Im unsuccessful.<br>
&gt;<br>
&gt; Would anyone have any documentation or any sites or if you have a heart,<br>
&gt; provide a howto to get GFS working.<br>
&gt;<br>
&gt;  From myside, all ive done is:<br>
&gt;<br>
&gt; aptitude install gfs2-tools<br>
&gt; modprobe gfs2<br>
&gt; gfs_mkfs -p lock_dlm -t lolcats:drbdtest /dev/drbd0 -j 2<br>
&gt;<br>
&gt; thats all, ive done. No editting of configs etc.<br>
&gt;<br>
&gt; When I try,<br>
&gt;<br>
&gt; mount -t gfs2 /dev/drbd0 /drbd/<br>
&gt;<br>
&gt; I get the following message:<br>
&gt;<br>
&gt; /sbin/mount.gfs2: can&#39;t connect to gfs_controld: Connection refused<br>
&gt; /sbin/mount.gfs2: gfs_controld not running<br>
&gt; /sbin/mount.gfs2: error mounting lockproto lock_dlm<br>
&gt;<br>
&gt; If anyone can help, it would be appreciated.<br>
&gt;<br>
&gt; Kind Regards<br>
&gt; Brent Clark<br>
&gt;<br>
&gt; --<br>
&gt; Linux-cluster mailing list<br>
&gt; <a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br>
&gt; <a href="https://www.redhat.com/mailman/listinfo/linux-cluster" \
target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br> <br>
<br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Sat, 15 May 2010 04:59:23 +0100<br>
From: Corey Kovacs &lt;<a \
                href="mailto:corey.kovacs@gmail.com">corey.kovacs@gmail.com</a>&gt;<br>
                
To: linux clustering &lt;<a \
                href="mailto:linux-cluster@redhat.com">linux-cluster@redhat.com</a>&gt;<br>
                
Subject: Re: [Linux-cluster] pull plug on node, service never<br>
        relocates<br>
Message-ID:<br>
        &lt;<a href="mailto:AANLkTinYVvrit1oPb76TfLa9vmp1AMHcGI3eoZALHxrJ@mail.gmail.com">AANLkTinYVvrit1oPb76TfLa9vmp1AMHcGI3eoZALHxrJ@mail.gmail.com</a>&gt;<br>
                
Content-Type: text/plain; charset=ISO-8859-1<br>
<br>
What happens when you do ...<br>
<br>
fence_node 192.168.1.4<br>
<br>
from any of the other nodes?<br>
<br>
if that doesn&#39;t work, then fencing is not configured correctly and you<br>
should try to invoke the fence agent directly.<br>
Also, it would help if you included the APC model and firmware rev.<br>
The fence_apc agent can be finicky about such things.<br>
<br>
<br>
Hope this helps.<br>
<br>
-Core<br>
<br>
On Fri, May 14, 2010 at 8:45 PM, Dusty &lt;<a \
href="mailto:dhoffutt@gmail.com">dhoffutt@gmail.com</a>&gt; wrote:<br> &gt; \
Greetings,<br> &gt;<br>
&gt; Using stock &quot;clustering&quot; and &quot;cluster-storage&quot; from RHEL5 \
update 4 X86_64<br> &gt; ISO.<br>
&gt;<br>
&gt; As an example using my below config:<br>
&gt;<br>
&gt; Node1 is running service1, node2 is running service2, etc, etc, node5 is<br>
&gt; spare and available for the relocation of any failover domain / cluster<br>
&gt; service.<br>
&gt;<br>
&gt; If I go into the APC PDU and turn off the electrical port to node1, node2<br>
&gt; will fence node1 (going into the APC PDU and doing and off, on on \
node1&#39;s<br> &gt; port), this is fine. Works well. When node1 comes back up, then \
it shuts<br> &gt; down service1 and service1 relocates to node5.<br>
&gt;<br>
&gt; Now if I go in the lab and literally pull the plug on node5 running<br>
&gt; service1, another node fences node5 via the APC - can check the APC PDU log<br>
&gt; and see that it has done an off/on on node5&#39;s electrical port just fine.<br>
&gt;<br>
&gt; But I pulled the plug on node5 - resetting the power doesn&#39;t matter. I \
want<br> &gt; to simulate a completely dead node, and have the service relocate in \
this<br> &gt; case of complete node failure.<br>
&gt;<br>
&gt; In this RHEL5.4 cluster, the service never relocates. I can similate this on<br>
&gt; any node for any service. What if a node&#39;s motherboard fries?<br>
&gt;<br>
&gt; What can I set to have the remaining nodes stop waiting for the reboot of a<br>
&gt; failed node and just go ahead and relocate the cluster service that had been<br>
&gt; running on the now failed node?<br>
&gt;<br>
&gt; Thank you!<br>
&gt;<br>
&gt; versions:<br>
&gt;<br>
&gt; cman-2.0.115-1.el5<br>
&gt; openais-0.80.6-8.el5<br>
&gt; modcluster-0.12.1-2.el5<br>
&gt; lvm2-cluster-2.02.46-8.el5<br>
&gt; rgmanager-2.0.52-1.el5<br>
&gt; ricci-0.12.2-6.el5<br>
&gt;<br>
&gt; cluster.conf (sanitized, real scripts removed, all gfs2 mounts gone for<br>
&gt; clarity):<br>
&gt; &lt;?xml version=&quot;1.0&quot;?&gt;<br>
&gt; &lt;cluster config_version=&quot;1&quot;<br>
&gt; name=&quot;alderaanDefenseShieldRebelAllianceCluster&quot;&gt;<br>
&gt; ??? &lt;fence_daemon clean_start=&quot;0&quot; post_fail_delay=&quot;3&quot; \
post_join_delay=&quot;60&quot;/&gt;<br> &gt; ??? &lt;clusternodes&gt;<br>
&gt; ??????? &lt;clusternode name=&quot;192.168.1.1&quot; nodeid=&quot;1&quot; \
votes=&quot;1&quot;&gt;<br> &gt; ??????????? &lt;fence&gt;<br>
&gt; ??????????????? &lt;method name=&quot;1&quot;&gt;<br>
&gt; ??????????????????? &lt;device name=&quot;apc_pdu&quot; port=&quot;1&quot; \
switch=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;/method&gt;<br>
&gt; ??????????? &lt;/fence&gt;<br>
&gt; ??????? &lt;/clusternode&gt;<br>
&gt; ??????? &lt;clusternode name=&quot;192.168.1.2&quot; nodeid=&quot;2&quot; \
votes=&quot;1&quot;&gt;<br> &gt; ??????????? &lt;fence&gt;<br>
&gt; ??????????????? &lt;method name=&quot;1&quot;&gt;<br>
&gt; ??????????????????? &lt;device name=&quot;apc_pdu&quot; port=&quot;2&quot; \
switch=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;/method&gt;<br>
&gt; ??????????? &lt;/fence&gt;<br>
&gt; ??????? &lt;/clusternode&gt;<br>
&gt; ??????? &lt;clusternode name=&quot;192.168.1.3&quot; nodeid=&quot;3&quot; \
votes=&quot;1&quot;&gt;<br> &gt; ??????????? &lt;fence&gt;<br>
&gt; ??????????????? &lt;method name=&quot;1&quot;&gt;<br>
&gt; ??????????????????? &lt;device name=&quot;apc_pdu&quot; port=&quot;3&quot; \
switch=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;/method&gt;<br>
&gt; ??????????? &lt;/fence&gt;<br>
&gt; ??????? &lt;/clusternode&gt;<br>
&gt; ??????? &lt;clusternode name=&quot;192.168.1.4&quot; nodeid=&quot;4&quot; \
votes=&quot;1&quot;&gt;<br> &gt; ??????????? &lt;fence&gt;<br>
&gt; ??????????????? &lt;method name=&quot;1&quot;&gt;<br>
&gt; ??????????????????? &lt;device name=&quot;apc_pdu&quot; port=&quot;4&quot; \
switch=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;/method&gt;<br>
&gt; ??????????? &lt;/fence&gt;<br>
&gt; ??????? &lt;/clusternode&gt;<br>
&gt; ??????? &lt;clusternode name=&quot;192.168.1.5&quot; nodeid=&quot;5&quot; \
votes=&quot;1&quot;&gt;<br> &gt; ??????????? &lt;fence&gt;<br>
&gt; ??????????????? &lt;method name=&quot;1&quot;&gt;<br>
&gt; ??????????????????? &lt;device name=&quot;apc_pdu&quot; port=&quot;5&quot; \
switch=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;/method&gt;<br>
&gt; ??????????? &lt;/fence&gt;<br>
&gt; ??????? &lt;/clusternode&gt;<br>
&gt; ??? &lt;/clusternodes&gt;<br>
&gt; ??? &lt;cman expected_votes=&quot;6&quot;/&gt;<br>
&gt; ??? &lt;fencedevices&gt;<br>
&gt; ??????? &lt;fencedevice agent=&quot;fence_apc&quot; \
ipaddr=&quot;192.168.1.20&quot; login=&quot;device&quot;<br> &gt; \
name=&quot;apc_pdu&quot; \
passwd=&quot;wonderwomanWasAPrettyCoolSuperhero&quot;/&gt;<br> &gt; ??? \
&lt;/fencedevices&gt;<br> &gt; ??? &lt;rm&gt;<br>
&gt; ??????? &lt;failoverdomains&gt;<br>
&gt; ??????????? &lt;failoverdomain name=&quot;fd1&quot; nofailback=&quot;0&quot; \
ordered=&quot;1&quot;<br> &gt; restricted=&quot;1&quot;&gt;<br>
&gt; ??????????????? &lt;failoverdomainnode name=&quot;192.168.1.1&quot; \
priority=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.2&quot; priority=&quot;2&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.3&quot; priority=&quot;3&quot;/&gt;<br> \
&gt; ??????????????? &lt;failoverdomainnode name=&quot;192.168.1.4&quot; \
priority=&quot;4&quot;/&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.5&quot; priority=&quot;5&quot;/&gt;<br> &gt; ??????????? \
&lt;/failoverdomain&gt;<br> &gt; ??????????? &lt;failoverdomain name=&quot;fd2&quot; \
nofailback=&quot;0&quot; ordered=&quot;1&quot;<br> &gt; \
restricted=&quot;1&quot;&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.1&quot; priority=&quot;5&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.2&quot; priority=&quot;1&quot;/&gt;<br> \
&gt; ??????????????? &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
priority=&quot;2&quot;/&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.4&quot; priority=&quot;3&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.5&quot; priority=&quot;4&quot;/&gt;<br> \
&gt; ??????????? &lt;/failoverdomain&gt;<br> &gt; ??????????? &lt;failoverdomain \
name=&quot;fd3&quot; nofailback=&quot;0&quot; ordered=&quot;1&quot;<br> &gt; \
restricted=&quot;1&quot;&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.1&quot; priority=&quot;4&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.2&quot; priority=&quot;5&quot;/&gt;<br> \
&gt; ??????????????? &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
priority=&quot;1&quot;/&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.4&quot; priority=&quot;2&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.5&quot; priority=&quot;3&quot;/&gt;<br> \
&gt; ??????????? &lt;/failoverdomain&gt;<br> &gt; ??????????? &lt;failoverdomain \
name=&quot;fd4&quot; nofailback=&quot;0&quot; ordered=&quot;1&quot;<br> &gt; \
restricted=&quot;1&quot;&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.1&quot; priority=&quot;3&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.2&quot; priority=&quot;4&quot;/&gt;<br> \
&gt; ??????????????? &lt;failoverdomainnode name=&quot;192.168.1.3&quot; \
priority=&quot;5&quot;/&gt;<br> &gt; ??????????????? &lt;failoverdomainnode \
name=&quot;192.168.1.4&quot; priority=&quot;1&quot;/&gt;<br> &gt; ??????????????? \
&lt;failoverdomainnode name=&quot;192.168.1.5&quot; priority=&quot;2&quot;/&gt;<br> \
&gt; ??????????? &lt;/failoverdomain&gt;<br> &gt; ??????? \
&lt;/failoverdomains&gt;<br> &gt; ??????? &lt;resources&gt;<br>
&gt; ??????????? &lt;ip address=&quot;10.1.1.1&quot; \
monitor_link=&quot;1&quot;/&gt;<br> &gt; ??????????? &lt;ip \
address=&quot;10.1.1.2&quot; monitor_link=&quot;1&quot;/&gt;<br> &gt; ??????????? \
&lt;ip address=&quot;10.1.1.3&quot; monitor_link=&quot;1&quot;/&gt;<br> &gt; \
??????????? &lt;ip address=&quot;10.1.1.4&quot; monitor_link=&quot;1&quot;/&gt;<br> \
&gt; ??????????? &lt;ip address=&quot;10.1.1.5&quot; \
monitor_link=&quot;1&quot;/&gt;<br> &gt; ??????????? &lt;script \
file=&quot;/usr/local/bin/service1&quot; name=&quot;service1&quot;/&gt;<br> &gt; \
??????????? &lt;script file=&quot;/usr/local/bin/service2&quot; \
name=&quot;service2&quot;/&gt;<br> &gt; ??????????? &lt;script \
file=&quot;/usr/local/bin/service3&quot; name=&quot;service3&quot;/&gt;<br> &gt; \
??????????? &lt;script file=&quot;/usr/local/bin/service4&quot; \
name=&quot;service4&quot;/&gt;<br> &gt; ?????? &lt;/resources&gt;<br>
&gt; ??????? &lt;service autostart=&quot;1&quot; domain=&quot;fd1&quot; \
exclusive=&quot;1&quot; name=&quot;service1&quot;<br> &gt; \
recovery=&quot;relocate&quot;&gt;<br> &gt; ??????????? &lt;ip \
ref=&quot;10.1.1.1&quot;/&gt;<br> &gt; ??????????? &lt;script \
ref=&quot;service1&quot;/&gt;<br> &gt; ??????? &lt;/service&gt;<br>
&gt; ??????? &lt;service autostart=&quot;1&quot; domain=&quot;fd2&quot; \
exclusive=&quot;1&quot; name=&quot;service2&quot;<br> &gt; \
recovery=&quot;relocate&quot;&gt;<br> &gt; ??????????? &lt;ip \
ref=&quot;10.1.1.2&quot;/&gt;<br> &gt; ??????????? &lt;script \
ref=&quot;service2&quot;/&gt;<br> &gt; ??????? &lt;/service&gt;<br>
&gt; ??????? &lt;service autostart=&quot;1&quot; domain=&quot;fd3&quot; \
exclusive=&quot;1&quot; name=&quot;service3&quot;<br> &gt; \
recovery=&quot;relocate&quot;&gt;<br> &gt; ??????????? &lt;ip \
ref=&quot;10.1.1.3&quot;/&gt;<br> &gt; ??????????? &lt;script \
ref=&quot;service3&quot;/&gt;<br> &gt; ??????? &lt;/service&gt;<br>
&gt; ??????? &lt;service autostart=&quot;1&quot; domain=&quot;fd4&quot; \
exclusive=&quot;1&quot; name=&quot;service4&quot;<br> &gt; \
recovery=&quot;relocate&quot;&gt;<br> &gt; ??????????? &lt;ip \
ref=&quot;10.1.1.4&quot;/&gt;<br> &gt; ??????????? &lt;script \
ref=&quot;service4&quot;/&gt;<br> &gt; ??????? &lt;/service&gt;<br>
&gt; ??? &lt;/rm&gt;<br>
&gt; &lt;/cluster&gt;<br>
&gt;<br>
&gt;<br>
&gt; --<br>
&gt; Linux-cluster mailing list<br>
&gt; <a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br>
&gt; <a href="https://www.redhat.com/mailman/listinfo/linux-cluster" \
target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br> \
&gt;<br> <br>


------------------------------

Message: 5
Date: Sat, 15 May 2010 15:26:49 +0200
From: "Kit Gerrits" <kitgerrits@gmail.com>
To: "'linux clustering'" <linux-cluster@redhat.com>
Subject: Re: [Linux-cluster] pull plug on node, service never
        relocates
Message-ID: <4beea118.1067f10a.4a1f.ffff8975@mx.google.com>
Content-Type: text/plain; charset="us-ascii"

Hello,

You might want to check the syslog to see if the cluster has noticed the
outage and what it has tried to do about it.
You can also check the node status via 'cman_tool nodes' (an explanation of
the states is in the cman_tool manpage).
Does the server have another power source, by any chance?
  (If not, make sure you DO have dual power supplies. These things die often.)
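
For example, on a RHEL 5 node those checks might look something like this
(assuming the stock cman/rgmanager tools and the default syslog location,
/var/log/messages):

# quorum and overall cluster state
cman_tool status

# per-node membership state (state codes are explained in cman_tool(8))
cman_tool nodes

# service states as rgmanager sees them
clustat

# what fenced and rgmanager logged around the failure
grep -E 'fenced|rgmanager' /var/log/messages | tail -n 50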

Regards,

Kit

  _____

From: linux-cluster-bounces@redhat.com
[mailto:linux-cluster-bounces@redhat.com] On Behalf Of Dusty
Sent: Friday, 14 May 2010 21:45
To: Linux-cluster@redhat.com
Subject: [Linux-cluster] pull plug on node, service never relocates

Greetings,

Using stock "clustering" and "cluster-storage" from the RHEL5 update 4 X86_64
ISO.

As an example using my config below:

Node1 is running service1, node2 is running service2, etc., etc.; node5 is
spare and available for the relocation of any failover domain / cluster
service.

If I go into the APC PDU and turn off the electrical port to node1, node2
will fence node1 (going into the APC PDU and doing an off/on on node1's
port); this is fine. Works well. When node1 comes back up, it shuts down
service1 and service1 relocates to node5.

Now if I go into the lab and literally pull the plug on node5 running
service1, another node fences node5 via the APC - I can check the APC PDU log
and see that it has done an off/on on node5's electrical port just fine.

But I pulled the plug on node5 - resetting the power doesn't matter. I want
to simulate a completely dead node, and have the service relocate in this
case of complete node failure.

In this RHEL5.4 cluster, the service never relocates. I can simulate this on
any node for any service. What if a node's motherboard fries?

What can I set to have the remaining nodes stop waiting for the reboot of a
failed node and just go ahead and relocate the cluster service that had been
running on the now-failed node?

Thank you!

versions:

cman-2.0.115-1.el5
openais-0.80.6-8.el5
modcluster-0.12.1-2.el5
lvm2-cluster-2.02.46-8.el5
rgmanager-2.0.52-1.el5
ricci-0.12.2-6.el5

cluster.conf (sanitized, real scripts removed, all gfs2 mounts removed for
clarity):
<?xml version="1.0"?>
<cluster config_version="1" name="alderaanDefenseShieldRebelAllianceCluster">
    <fence_daemon clean_start="0" post_fail_delay="3" post_join_delay="60"/>
    <clusternodes>
        <clusternode name="192.168.1.1" nodeid="1" votes="1">
            <fence>
                <method name="1">
                    <device name="apc_pdu" port="1" switch="1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="192.168.1.2" nodeid="2" votes="1">
            <fence>
                <method name="1">
                    <device name="apc_pdu" port="2" switch="1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="192.168.1.3" nodeid="3" votes="1">
            <fence>
                <method name="1">
                    <device name="apc_pdu" port="3" switch="1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="192.168.1.4" nodeid="4" votes="1">
            <fence>
                <method name="1">
                    <device name="apc_pdu" port="4" switch="1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="192.168.1.5" nodeid="5" votes="1">
            <fence>
                <method name="1">
                    <device name="apc_pdu" port="5" switch="1"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="6"/>
    <fencedevices>
        <fencedevice agent="fence_apc" ipaddr="192.168.1.20" login="device"
            name="apc_pdu" passwd="wonderwomanWasAPrettyCoolSuperhero"/>
    </fencedevices>
    <rm>
        <failoverdomains>
            <failoverdomain name="fd1" nofailback="0" ordered="1" restricted="1">
                <failoverdomainnode name="192.168.1.1" priority="1"/>
                <failoverdomainnode name="192.168.1.2" priority="2"/>
                <failoverdomainnode name="192.168.1.3" priority="3"/>
                <failoverdomainnode name="192.168.1.4" priority="4"/>
                <failoverdomainnode name="192.168.1.5" priority="5"/>
            </failoverdomain>
            <failoverdomain name="fd2" nofailback="0" ordered="1" restricted="1">
                <failoverdomainnode name="192.168.1.1" priority="5"/>
                <failoverdomainnode name="192.168.1.2" priority="1"/>
                <failoverdomainnode name="192.168.1.3" priority="2"/>
                <failoverdomainnode name="192.168.1.4" priority="3"/>
                <failoverdomainnode name="192.168.1.5" priority="4"/>
            </failoverdomain>
            <failoverdomain name="fd3" nofailback="0" ordered="1" restricted="1">
                <failoverdomainnode name="192.168.1.1" priority="4"/>
                <failoverdomainnode name="192.168.1.2" priority="5"/>
                <failoverdomainnode name="192.168.1.3" priority="1"/>
                <failoverdomainnode name="192.168.1.4" priority="2"/>
                <failoverdomainnode name="192.168.1.5" priority="3"/>
            </failoverdomain>
            <failoverdomain name="fd4" nofailback="0" ordered="1" restricted="1">
                <failoverdomainnode name="192.168.1.1" priority="3"/>
                <failoverdomainnode name="192.168.1.2" priority="4"/>
                <failoverdomainnode name="192.168.1.3" priority="5"/>
                <failoverdomainnode name="192.168.1.4" priority="1"/>
                <failoverdomainnode name="192.168.1.5" priority="2"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="10.1.1.1" monitor_link="1"/>
            <ip address="10.1.1.2" monitor_link="1"/>
            <ip address="10.1.1.3" monitor_link="1"/>
            <ip address="10.1.1.4" monitor_link="1"/>
            <ip address="10.1.1.5" monitor_link="1"/>
            <script file="/usr/local/bin/service1" name="service1"/>
            <script file="/usr/local/bin/service2" name="service2"/>
            <script file="/usr/local/bin/service3" name="service3"/>
            <script file="/usr/local/bin/service4" name="service4"/>
        </resources>
        <service autostart="1" domain="fd1" exclusive="1" name="service1" recovery="relocate">
            <ip ref="10.1.1.1"/>
            <script ref="service1"/>
        </service>
        <service autostart="1" domain="fd2" exclusive="1" name="service2" recovery="relocate">
            <ip ref="10.1.1.2"/>
            <script ref="service2"/>
        </service>
        <service autostart="1" domain="fd3" exclusive="1" name="service3" recovery="relocate">
            <ip ref="10.1.1.3"/>
            <script ref="service3"/>
        </service>
        <service autostart="1" domain="fd4" exclusive="1" name="service4" recovery="relocate">
            <ip ref="10.1.1.4"/>
            <script ref="service4"/>
        </service>
    </rm>
</cluster>
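
For reference, a fence and relocation setup like the one above can be
exercised by hand from a surviving node before trusting it with a real
failure. A minimal sketch, assuming the stock RHEL 5 cman/rgmanager tools and
the node and service names from this config:

# fence a node through whatever method/device cluster.conf defines for it
fence_node 192.168.1.5

# ask rgmanager to relocate a service to a specific member, to confirm that
# relocation works when requested explicitly
clusvcadm -r service1 -m 192.168.1.2

# watch what fenced and rgmanager decide to do in the meantime
tail -f /var/log/messages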

------------------------------

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 73, Issue 15
*********************************************


--
Warm Regards
Parshuram Prasad
+91-9560170372
Sr. System Administrator & Database Administrator

Stratoshear Technology Pvt. Ltd.

BPS House Green Park -16
www.stratoshear.com



--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
