
List:       linux-ha
Subject:    Re: [Linux-HA] RE: Fencing prevents resource from failing over
From:       "Robert Heinzmann (ml)" <reg () elconas ! de>
Date:       2008-05-27 10:00:47
Message-ID: 483BDBCF.9060001 () elconas ! de

Hi Dejan,

> Nothing changed since then. At any rate, it is possible to define
> more than one stonith resource, in which case the cluster will try
> them in round-robin fashion until one succeeds. If one of them is
> meatware, then it should be possible to tell the cluster to proceed
> using meatclient. Note that in this case, i.e. if more than one
> stonith resource is involved, it may happen that the meatware
> plugin is not active until the other times out (and vice versa).
> So, the operator would have to run meatclient repeatedly until it
> succeeds, or watch the console of the DC for the message:


Thanks for this info - this is GREAT. I just tested it on my test 
system, and it works perfectly.

--- CIB stonith part ---

   <clone id="DoFencing">
     <instance_attributes id="0240f990-ec1f-48f5-b24d-ab9324afa535">
       <attributes>
         <nvpair name="clone_max" value="2" 
id="abf2d715-4ec1-4bb7-9181-1baaded779fb"/>
         <nvpair name="clone_node_max" value="1" 
id="cf44c2f0-1c2f-4736-a78e-bbe0381eb035"/>
       </attributes>
     </instance_attributes>
     <primitive id="child_DoFencing" class="stonith" 
type="external/fail" provider="heartbeat">
       <operations>
         <op name="monitor" interval="5s" timeout="20s" prereq="nothing" 
id="2e1b3b3d-e684-4d85-85bf-621382abe49b"/>
         <op name="start" timeout="20s" prereq="nothing" 
id="481f565a-2f5e-438a-963b-cf43283aaa4b"/>
       </operations>
       <instance_attributes id="66353c41-02da-4d97-a2e4-9e107d3bf1d3">
         <attributes>
           <nvpair name="hostlist" value="node1,node2" 
id="1451a66e-4b21-4270-9ec6-b8ee9a618d3b"/>
           <nvpair name="ilo_hostname" value="klaus" 
id="f303d693-5663-46f0-936a-0c55e64a857b"/>
         </attributes>
       </instance_attributes>
     </primitive>
   </clone>
   <clone id="DoFencingManual">
     <instance_attributes>
       <attributes>
         <nvpair name="clone_max" value="2"/>
         <nvpair name="clone_node_max" value="1"/>
       </attributes>
     </instance_attributes>
     <primitive id="child_DoFencing_Manual" class="stonith" 
type="meatware" provider="heartbeat">
       <operations>
         <op name="monitor" interval="5s" timeout="20s" prereq="nothing"/>
         <op name="start" timeout="20s" prereq="nothing"/>
       </operations>
       <instance_attributes>
         <attributes>
           <nvpair name="hostlist" value="node1,node2"/>
         </attributes>
       </instance_attributes>
     </primitive>
   </clone>
---
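Before relying on a setup like this, it is worth verifying that both plugin types are installed and that the cluster actually loaded both clones. This is only a sketch, assuming the Heartbeat 2 / cluster-glue command-line tools (stonith, cibadmin, crm_mon) are available on the node; the grep patterns match the example config above:

```shell
#!/bin/sh
# Sanity checks for the two-layer fencing setup (sketch; run on a
# cluster node).  Each check is guarded so the script is a no-op on
# machines where the tool is not installed.

# 1. Are both stonith plugin types available on this machine?
if command -v stonith >/dev/null 2>&1; then
    stonith -L | grep -E 'external/fail|meatware'
fi

# 2. Did the cluster load both fencing clones?
if command -v cibadmin >/dev/null 2>&1; then
    cibadmin -Q -o resources | grep -E 'DoFencing|DoFencingManual'
fi

# 3. Are the clone instances actually running?
if command -v crm_mon >/dev/null 2>&1; then
    crm_mon -1 | grep -i stonith
fi
```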

If iLO fencing fails (which it always does in my configuration above - 
the external/fail plugin fails by design), I can still trigger the 
failover with the meatware plugin.
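The operator-side procedure Dejan describes - re-running meatclient until the meatware plugin is actually waiting for confirmation - can be sketched as a small retry loop. The node name node1, the retry count, and the delay are assumptions for illustration; adjust them for your cluster:

```shell
#!/bin/sh
# Retry helper for the operator side of the meatware plugin: meatclient
# may have to be re-run because the plugin is not active until the
# other stonith resource (external/fail here) has timed out.

retry() {
    # retry MAX CMD... : run CMD until it succeeds, at most MAX times,
    # sleeping briefly between attempts.
    max=$1; shift
    n=0
    while [ "$n" -lt "$max" ]; do
        "$@" && return 0
        n=$((n + 1))
        sleep 2
    done
    return 1
}

# Confirm manually that node1 really is down/reset, then tell the
# cluster to proceed (only meaningful where meatware is waiting):
if command -v meatclient >/dev/null 2>&1; then
    retry 20 meatclient -c node1
fi
```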

This is really helpful and it should be documented, because you must 
set up your cluster like this BEFORE the fire in datacenter 2 :)

P.S. FYI, tomorrow there is a presentation about Heartbeat 2 at LinuxTag 
2008 in Berlin, at 10:00 in "Saal Paris (UG)".

Regards,
Robert



_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
