
List:       linux-ha
Subject:    Re: [Linux-HA] Stupid newbie problem with haresources file
From:       "Mike Bernhardt" <bernhardt () bart ! gov>
Date:       2006-12-20 17:02:03
Message-ID: OF462CFBBB.5C5692F3-ON8825724A.005D92FF () bart ! gov

Unfortunately that doesn't help. The problem is not how to use IPaddr, it's
that I can't get rid of the MailTo at the end of the line. If I do,
everything else breaks. I just added 2 more services, and they all work as
long as the line ends with the MailTo:: service. Otherwise they break!
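For reference, the line without MailTo should be valid haresources syntax on its own (hostnames and addresses here are the poster's own). One hypothetical cause of "works only with the old ending", not confirmed in the thread, is invisible trailing characters left behind by the edit; a quick way to check:

```shell
# Sketch: write the MailTo-less haresources line and make any stray
# end-of-line characters visible. cat -A prints '$' at each line end
# and '^M' for a carriage return that would confuse the parser.
HARES=$(mktemp)
printf 'dns-pri.domain.com 192.168.0.1 named\n' > "$HARES"
cat -A "$HARES"
rm -f "$HARES"
```

A clean line should end in a bare `$` with no `^M` or trailing blanks before it.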

Message: 2
Date: Tue, 19 Dec 2006 22:19:26 +0100
From: MIQUET Pascal <pascal.miquet@wanadoo.fr>
Subject: Re: [Linux-HA] Stupid newbie problem with haresources file
To: General Linux-HA mailing list <linux-ha@lists.linux-ha.org>
Message-ID: <1166563167.12397.1.camel@TR40-FE1LHO.miquet.org>
Content-Type: text/plain; charset=ISO-8859-15

I use IPaddr and set the haresource like this:

 IPaddr2::192.168.1.220/24/eth0/192.168.6.255

HTH

Pascal 

On Tuesday 19 December 2006 at 12:48 -0800, Michael Bernhardt wrote:
> My heartbeat needs are simple: I have 2 servers which share an IP provided
> by heartbeat, and I keep named running on the one with the active IP.
> Everything works fine when the last line in haresources is this:
> dns-pri.domain.com   192.168.0.1 named MailTo::me@domain.com
> 
> where dns-pri is the normally active server and named refers to the named
> startup script in /etc/init.d. I decided that I don't want to email myself
> anymore, so I simply removed MailTo::me@domain.com from the end of the line.
> When I do that, heartbeat says that it can't find named, and therefore
> IpAddr doesn't take over either. If I remove named also, then IpAddr
> complains about address formatting and it doesn't work. If I put back the
> way it is above, then all is OK again.
> 
> What am I missing? Do I need some sort of line termination or what? Also,
> Can someone clarify the purpose of the dns-pri.domain.com at the beginning
> of that line? Is that the name of the group and can it be anything, or am I
> correct that it is the name of the normally active server? I am not using
> auto-fallback.
> 
> Thanks!
> _______________________________________________
> Linux-HA mailing list
> Linux-HA@lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems



------------------------------

Message: 3
Date: Wed, 20 Dec 2006 00:17:50 +0100
From: "Andreas Kurz" <andreas.kurz@gmail.com>
Subject: Re: [Fwd: [Linux-HA] repeatable failovers]
To: linux-ha@lists.linux-ha.org
Message-ID: <904050d50612191517l504214d8pd2daa831d248a8bb@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello!

Some things I found out about the stickiness of resources .... and I
hope most of it is true ;-)

- With a resource-stickiness of INFINITY the resource stays on the
current node, even if another node with a higher score is online,
unless the "start" operation fails, e.g. after a failure of the
"monitor" operation.

- With a resource-stickiness of -INFINITY the resource _always_ moves
away from the current node if another node comes online with a
non-negative score for that resource (so the default score of 0 is
sufficient).

- If the monitor operation fails for a resource, heartbeat tries to
restart it locally as long as the "start" operation is successful (if
no resource-failure-stickiness is defined).

- If resource-failure-stickiness is defined for a resource, the
fail-counter is increased and the score of the current node for that
resource is decreased by the resource-failure-stickiness -- to reset
the fail-counter manually, see http://www.linux-ha.org/v2/faq ...
search for "crm_failcount"
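In practice the crm_failcount calls look like this (node and resource names are placeholders; the -G/-D/-U/-r flags are the ones used elsewhere in this digest):

```
# Query the current fail-count for a resource on a given node:
crm_failcount -G -U node1 -r my_resource
# Delete (reset) it so the node becomes eligible for the resource again:
crm_failcount -D -U node1 -r my_resource
```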

- a negative score for a resource enforces failover to another node
(with a positive score)

- If you really want your resource to fail over to the other node
after every error ... whether this is a good idea or not ... and
without a manual reset of the fail-count, have a look at:
http://www.linux-ha.org/_cache/HeartbeatTutorials__LinuxKongress-2006-tutorial.pdf
... search for "attrd_updater".

You could set your own score attributes for your nodes depending on
the result of your special script. This should make it possible to
reduce the score for the current node without increasing the
fail-count (resource-failure-stickiness=0).
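A rough sketch of that attrd_updater approach (the attribute name and value are hypothetical, and the exact option set varies by release; check the tutorial above for details):

```
# From the custom monitoring script, push a node attribute that your
# placement constraints score on, instead of touching the fail-count:
attrd_updater -n my_score_attr -v -1000
```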

Regards,
Andi

> Hi
>
> On my 2-node cluster, I have one network service monitored by a custom
> "watcher".
> For the old heartbeat (v1), I used the following configuration:
> - Watcher monitors process of network service. If watcher detects any
> problem with service, watcher will call hb_standby on current machine.
> - The auto_failback setting is off. If there is no problem, resources shall remain
> on the same machine till eternity. If any problem, resources shall
> move to another machine.
>
> So the typical usage (when excluding hardware/network/OS problems):
> - resources are on machine A
> - after some time, watcher on A detects problem with service on A,
> calling hb_standby
> - resources are moved to machine B
> - after some time, watcher on B detects problem with service on B,
> calling hb_standby
> - resources are moved to machine A
> - after some time, watcher on A detects problem with service on A,
> calling hb_standby
> - resources are moved to machine B
> ...
>
> I wanted to configure v2 heartbeat in the similar way. Instead of
> calling hb_standby, I wanted to use monitoring functionality of
> heartbeat.
> My OCF resource agent is called processResource. For the monitor
> operation it behaves in the following way:
> - network service is running correctly - return 0 to heartbeat
> - service was not started or was stopped - return 7
> - service is starting up - return 1
> - service is stopping - return 1
> - watcher detected problem with service - return 1 (in meantime,
> watcher is stopping service)
> Start operation of processResource lasts typically some milliseconds,
> stop operation lasts up to 8 seconds.
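The monitor return codes Palo describes can be sketched as a shell case statement (a minimal sketch: the state is faked with a parameter, whereas a real OCF agent would probe the actual process):

```shell
# Hypothetical monitor logic mirroring the return codes described above.
# In a real OCF resource agent, the state would be probed, not passed in.
monitor() {
    case "$1" in
        running) return 0 ;;  # service running correctly (OCF_SUCCESS)
        stopped) return 7 ;;  # not started or stopped (OCF_NOT_RUNNING)
        *)       return 1 ;;  # starting/stopping/watcher-detected problem
    esac
}
monitor running; echo "running -> $?"
monitor stopped; echo "stopped -> $?"
monitor failing; echo "failing -> $?"
```

Returning 1 (a generic error) is what lets heartbeat treat a watcher-detected problem as a monitor failure.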
>
> To describe the problem, let's say "A->B" means the following failover:
> - resources are on machine A (processResource on A returns 0 to
> heartbeat's monitor operation)
> - after some time, watcher on A detects problem with service on A
> - processResource returns 1 to next heartbeat's monitor operation
> - heartbeat shall stop all resources on A
> - heartbeat shall start all resources on B
> - resources are on machine B (processResource on B returns 0 to
> heartbeat's monitor operation)
>
> Let's say "AxA" means the following failover:
> - resources are on machine A (processResource on A returns 0 to
> heartbeat's monitor operation)
> - after some time, watcher on A detects problem with service on A
> - processResource returns 1 to next heartbeat's monitor operation
> - heartbeat restarts resource processResource on A
> - resources are on machine A (processResource on A returns 0 to
> heartbeat's monitor operation)
>
>
> So my problem is that I was not able to configure heartbeat to do the
> following scenario:
> A->B->A->B->A->B ...
>
> According to
> http://www.linux-ha.org/v2/dtd1.0/annotated#default_resource_stickiness
> - I shall set default-resource-stickiness to INFINITY because of
> original auto_failback off. From
> http://www.linux-ha.org/v2/faq/forced_failover I understand that
> resources are moved to another machine immediately, if
> default_resource_failure_stickiness is low enough. Node score is zero,
> so I have default_resource_failure_stickiness -INFINITY.
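In heartbeat v2 these two settings live under crm_config in cib.xml; a sketch of the configuration Palo describes (the nvpair ids are arbitrary, and note the thread itself mixes hyphenated and underscored spellings of these attribute names, which vary between releases):

```xml
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <attributes>
      <nvpair id="opt-stickiness" name="default_resource_stickiness" value="INFINITY"/>
      <nvpair id="opt-fail-stickiness" name="default_resource_failure_stickiness" value="-INFINITY"/>
    </attributes>
  </cluster_property_set>
</crm_config>
```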
> The result of this configuration is:
> - start heartbeat on A and B
> - resources are on A
> - A->B
> - after some time, watcher on B detects problem with service on B
> - on B, processResource returns 1 to next heartbeat's monitor operation
> - heartbeat stops all resources on B
> And now, no resources are running on cluster!
> (see attachment, name of A: debo, name of B: fico)
> In this state crm_verify -VL gives:
> crm_verify[22877]: 2006/12/17_08:34:50 WARN: unpack_rsc_op: Processing
> failed op (x_processResource_monitor_5000) for x_processResource on
> debo
> crm_verify[22877]: 2006/12/17_08:34:50 WARN: unpack_rsc_op: Processing
> failed op (x_processResource_monitor_5000) for x_processResource on
> fico
>
>
> I suspect fail counts on both nodes were set and only a human can now
> start the resources again. (I tried to find how to "deactivate" fail
> counts, but with no success). I tried many combinations of stickiness
> and failure stickiness values, but repeatable failovers were not
> possible.
>
> Just a remark, when abs(stickiness) >= abs(failure stickiness), the
> usual behavior was:
> AxAxAxAxA ... or BxBxBxBxB ...
> And it is again useless.
>
> The attachment contains ha.cf, logs, and cibadmin -Ql outputs.
> debo machine: Linux, Debian sarge
> configure options: --with-group-name=haclient
> --with-ccmuser-name=hacluster --sysconfdir=/etc --localstatedir=/var
> --disable-tipc --disable-ldirectord --disable-snmp
> --enable-bundled_ltdl --enable-ltdl-convenience --disable-mgmt
> --disable-quorumd --disable-fatal-warnings --enable-crm-dev CFLAGS='-g
> -O0 -fno-unit-at-a-time'
> fico machine: Linux, Gentoo
> configure options: --with-group-name=cluster
> --with-ccmuser-name=cluster --with-group-id=65 --with-ccmuser-id=65
> --sysconfdir=/etc --localstatedir=/var --disable-tipc
> --disable-ldirectord --disable-snmp --enable-bundled_ltdl
> --enable-ltdl-convenience --disable-mgmt --disable-quorumd
> --disable-fatal-warnings --enable-crm-dev CFLAGS='-g -O0
> -fno-unit-at-a-time'
> Sources of heartbeat were taken from http://hg.linux-ha.org/dev
> changeset 9857.
>
>
> If you have any ideas how to get heartbeat to work in
> "A->B->A->B->A->B way", please let me know. Any help appreciated.
>
> Palo


------------------------------

Message: 4
Date: Tue, 19 Dec 2006 22:58:03 -0600
From: prashant n <massoo@30gigs.com>
Subject: Re: Re: [Linux-HA] heartbeat + IPaddr Resource is stopped
To: General Linux-HA mailing list <linux-ha@lists.linux-ha.org>
Message-ID: <026afe1ac75638c5b074f002b644488a@imap.30gigs.com>
Content-Type: text/plain; charset="ISO-8859-1"

hi,

As I said, I get the "IPaddr Resource is stopped" message whenever I
start or restart HA.

Prashant

On Fri, 15 Dec 2006 15:38:04 +0100, "Andrew Beekhof" <beekhof@gmail.com>
wrote:
> what is the problem here?
> 
> On 12/14/06, prashant n <massoo@30gigs.com> wrote:
>> hi,
>>
>> I have a 2-node HA test setup in which I have the following configuration:
>> O.S : CentOS 4.4 x86_64
>> drbd : kernel-module-drbd-2.6.9-42.EL-0.7.21-1.c4 & drbd-0.7.21-1.c4
>> heartbeat : heartbeat-2.0.7-1.c4
>> 2 NICs :
>> on node1 eth0=10.1.1.46, eth1=172.16.1.155
>> on node2 eth0=10.1.1.56, eth1=172.16.1.56
>> 1 serial port cable and 1 RJ45 crossover cable (connected to both nodes on eth0) connect the two nodes
>>
>> All the above was installed from the official rpms from CentOS releases.
>> I have configured the drbd successfully and it is running properly
>> When I try to do the sanity check as: sh /usr/lib64/heartbeat/BasicSanityCheck, I get the following output:
>>
>> ----------------------------------------------------------------------------------------------
>> Using interface: eth1
>> Starting base64 and md5 algorithm tests
>> base64 and md5 algorithm tests succeeded.
>> Starting heartbeat
>> Starting High-Availability services:
>> 2006/12/13_22:33:07 INFO: IPaddr Resource is stopped
>>                                                            [  OK  ]
>> Reloading heartbeat
>> Reloading heartbeat
>> Stopping heartbeat
>> Stopping High-Availability services:
>>                                                            [  OK  ]
>> Checking STONITH basic sanity.
>> Performing apphbd success case tests
>> Performing apphbd failure case tests
>> Starting IPC tests
>> Starting LRM tests
>> Starting heartbeat
>> Starting High-Availability services:
>> 2006/12/13_22:34:25 INFO: IPaddr Resource is stopped
>>                                                            [  OK  ]
>> starting STONITH Daemon tests
>> STONITH Daemon tests passed.
>> Stopping heartbeat
>> Stopping High-Availability services:
>>                                                            [  OK  ]
>> Starting CRM tests
>> CRM tests passed.
>> Starting heartbeat
>> Starting High-Availability services:
>>                                                            [  OK  ]
>> starting SNMP Agent tests
>> SNMP Agent tests failed.
>> Stopping heartbeat
>> Stopping High-Availability services:
>>                                                            [  OK  ]
>> 1 errors. Log file is stored in /tmp/linux-ha.testlog
>>
>> ----------------------------------------------------------------------------------------------
>> My /etc/ha.d/ha.cf is
>>
>> ----------------------------------------------------------------------------------------
>> logfacility local0                                      # syslog facility
>> debugfile /var/log/ha-debug                             # ha-debug logs
>> logfile /var/log/ha-log                                 # ha info logs
>> keepalive 1                                             # Send one heartbeat each second
>> warntime 2                                              # late HB
>> deadtime 10                                             # Declare nodes dead after 10 seconds
>> node drdb1.mobiapps.com drdb2.mobiapps.com              # List our cluster members
>> ping 172.16.1.1                                         # Ping our router to monitor ethernet connectivity
>> bcast eth0 eth1                                         # Broadcast heartbeats on eth0 and eth1 interfaces
>> baud 19200                                              # baud rate
>> serial /dev/ttyS0                                       # HB serial link
>> auto_failback no                                        # Don't fail back to drbd1.mobiapps.com automatically
>>
>> ---------------------------------------------------------------------------------------------------
>> My /etc/ha.d/haresources is
>>
>> ------------------------------------------------------------------------------------------------------
>> drdb1.mobiapps.com IPaddr2::172.16.1.50 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 apache
>>
>> -----------------------------------------------------------------------------------------------------
>> If any one wants the complete log I can attach the same.
>>
>> Please guide me how to resolve this "IPaddr Resource is stopped" issue.
>>
>> Thanks & Regards
>> Prashant
>>
>>
>> -----------------------------------------------------------
>> Sign up and get your 30GB webmail at www.30gigs.com now!
>> _______________________________________________
>> Linux-HA mailing list
>> Linux-HA@lists.linux-ha.org
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>> See also: http://linux-ha.org/ReportingProblems
>>
> _______________________________________________
> Linux-HA mailing list
> Linux-HA@lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems


------------------------------

Message: 5
Date: Wed, 20 Dec 2006 09:15:00 -0000
From: "Magnus Brown" <mbrown@nexagent.com>
Subject: [Linux-HA] Ungraceful shutdown problem
To: <linux-ha@lists.linux-ha.org>
Message-ID: <B2B4D3618441D941B811329A672FD64E02B14D17@THHS2EXBE2X.hostedservice2.net>
Content-Type: text/plain; charset="US-ASCII"

Hi all,

Sorry, I forgot I had unsubscribed from the list before sending this
email, so it will go to a moderator first.

I have some more info though.

I have tried removing the eth1 connection (as opposed to the eth0 one
which gives the problem) and the resources all remain running on their
respective nodes as they should - hurrah. So the problem only occurs
when I remove the eth0 connection.

Another problem: when the eth0 connection is restored and heartbeat is
restarted on the node which went down ungracefully, it starts both
ldap and weblogic as predicted (well, actually it finds that ldap and
weblogic are already running). But if the other node, where fuego is
running, is shut down gracefully, fuego is not moved over to the other
node. So I have a situation where a resource required to run is not
running anywhere in the cluster, nor does the previously failed node
try to restart it.

I thought that maybe the failure count had been set for fuego, so I
tried to check it with: -

crm_failcount -V -G -U edlapp02.eds.lcms.com -r fuego_res

which prints:

name=fail-count-fuego_res value=(null)
Error performing operation: The object/attribute does not exist

If I try and reset the failure count with: -

crm_failcount -D -U edlapp02.eds.lcms.com -r fuego_res

It has no effect. In order to get fuego to run on this previously failed
node I have to stop heartbeat, remove the following: -

rm -f /var/lib/heartbeat/cores/root/*
rm -f /var/lib/heartbeat/cores/nobody/*
rm -f /var/lib/heartbeat/cores/hacluster/*
rm -f /var/lib/heartbeat/hb_generation
rm -f /var/lib/heartbeat/hb_uuid
rm -f /var/lib/heartbeat/hostcache
rm -f /var/lib/heartbeat/pengine/*
rm -f /var/lib/heartbeat/crm/cib.xml.last
rm -f /var/lib/heartbeat/crm/cib.xml.sig
rm -f /var/lib/heartbeat/crm/cib.xml.sig.last

and copy back the initial cib.xml I used to start with.

If I could get the same behaviour with eth0 as with eth1 I would be
happy, as fuego fails to run correctly without eth0 and so is failed
over correctly when eth0 is down. I just need to stop heartbeat
shutting itself down when eth0 is taken down.

Thank you
Magnus

-----Original Message-----
From: Magnus Brown
Sent: 19 December 2006 12:39
To: 'linux-ha@lists.linux-ha.org'
Subject: Ungraceful shutdown problem

Hi all,

I have a problem with heartbeat shutting down ungracefully and leaving
managed processes still running. I have attached the cib.xml and a
zipped ha-log.

I have 2 nodes which are connected via 2 lan connections. The ha.cf is
shown below: -

use_logd on
udpport 694
keepalive 1
deadtime 45
mcast eth0 239.192.0.1 694 1 0
mcast eth1 239.192.0.2 694 1 0
node edlapp01.eds.lcms.com edlapp02.eds.lcms.com
crm yes

When I pull the eth0 cable on edlapp01, fuego is successfully moved to
edlapp02. I am then expecting ldap and weblogic to continue running on
edlapp01 but I get the following messages in ha-log: -

tengine[31320]: 2006/12/18_11:59:13 info: te_update_diff:callbacks.c
Processing diff (cib_update): 0.51.7185 -> 0.51.7186
tengine[31320]: 2006/12/18_11:59:13 info: match_graph_event:events.c
Action fuego_res_stop_0 (2) confirmed
tengine[31320]: 2006/12/18_11:59:13 info: te_pseudo_action:actions.c
Pseudo action 31 confirmed
tengine[31320]: 2006/12/18_11:59:13 info: te_pseudo_action:actions.c
Pseudo action 28 confirmed
tengine[31320]: 2006/12/18_11:59:13 info: send_rsc_command:actions.c
Initiating action 26: fuego_res_start_0 on edlapp02.eds.lcms.com
cib[2433]: 2006/12/18_11:59:13 info: write_cib_contents:io.c Wrote
version 0.51.7186 of the CIB to disk (digest:
3e4b41e1e8ce2f632e64696ae11c8b9d)
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Cannot write to media pipe
0: Resource temporarily unavailable
heartbeat[31250]: 2006/12/18_12:00:52 ERROR: Shutting down.

These "cannot write" and "shutting down" messages continue until: -

heartbeat[31250]: 2006/12/18_12:01:04 ERROR: Shutting down.
heartbeat[31250]: 2006/12/18_12:01:04 ERROR: Message hist queue is
filling up (200 messages in queue)
ccm[31309]: 2006/12/18_12:01:04 ERROR: Lost connection to heartbeat
service. Need to bail out.
cib[31310]: 2006/12/18_12:01:04 ERROR: cib_ha_connection_destroy:main.c
Heartbeat connection lost!  Exiting.
stonithd[31312]: 2006/12/18_12:01:04 ERROR: Disconnected with heartbeat
daemon
mgmtd[31315]: 2006/12/18_12:01:04 ERROR: Lost connection to heartbeat
service.
tengine[31320]: 2006/12/18_12:01:04 ERROR: stonithd_op_result_ready:
failed due to not on signon status.
cib[31310]: 2006/12/18_12:01:04 info: uninitializeCib:io.c The CIB has
been deallocated.
attrd[31313]: 2006/12/18_12:01:04 CRIT: attrd_ha_dispatch:attrd.c Lost
connection to heartbeat service.
stonithd[31312]: 2006/12/18_12:01:04 notice: /usr/lib/heartbeat/stonithd
normally quit.
crmd[31314]: 2006/12/18_12:01:04 CRIT: crmd_ha_msg_dispatch:callbacks.c
Lost connection to heartbeat service.
mgmtd[31315]: 2006/12/18_12:01:04 ERROR:
cib_native_msgready:cib_native.c Message pending on command channel
[31310]
tengine[31320]: 2006/12/18_12:01:04 ERROR:
tengine_stonith_connection_destroy:callbacks.c Fencing daemon has left
us
pengine[31321]: 2006/12/18_12:01:04 info: pengine_shutdown:main.c
Exiting PEngine (SIGTERM)
attrd[31313]: 2006/12/18_12:01:04 CRIT:
attrd_ha_connection_destroy:attrd.c Lost connection to heartbeat
service!
crmd[31314]: 2006/12/18_12:01:04 info: mem_handle_func:IPC broken, ccm
is dead before the client!

And then: -

crmd[31314]: 2006/12/18_12:01:13 info: verify_stopped:lrm.c Checking for
active resources before exit
crmd[31314]: 2006/12/18_12:01:13 ERROR: verify_stopped:lrm.c Resource
ldap_res:0 was active at shutdown.  You may ignore this error if it is
unmanaged.
crmd[31314]: 2006/12/18_12:01:13 ERROR: verify_stopped:lrm.c Resource
weblogic_res:0 was active at shutdown.  You may ignore this error if it
is unmanaged.
crmd[31314]: 2006/12/18_12:01:13 info: verify_stopped:lrm.c Checking for
active resources before exit
crmd[31314]: 2006/12/18_12:01:13 ERROR: verify_stopped:lrm.c Resource
ldap_res:0 was active at shutdown.  You may ignore this error if it is
unmanaged.
crmd[31314]: 2006/12/18_12:01:13 ERROR: verify_stopped:lrm.c Resource
weblogic_res:0 was active at shutdown.  You may ignore this error if it
is unmanaged.
crmd[31314]: 2006/12/18_12:01:13 info: do_lrm_control:lrm.c Disconnected
from the LRM
crmd[31314]: 2006/12/18_12:01:13 info: do_ha_control:control.c
Disconnected from Heartbeat
crmd[31314]: 2006/12/18_12:01:13 info: do_cib_control:cib.c
Disconnecting CIB
crmd[31314]: 2006/12/18_12:01:13 ERROR: send_ipc_message:ipc.c IPC
Channel to 31310 is not connected

and then heartbeat shuts down leaving ldap and weblogic running.

I would like heartbeat to shut down gracefully and stop ldap and
weblogic if it is going to go down; even better would be for it not to
shut down at all, as only fuego should move over to the other node.

I'm guessing that there is some config I can add to the cib.xml file to
achieve this, but I am unable to work out what it is.

Many thanks

Magnus




------------------------------

_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems

End of Linux-HA Digest, Vol 37, Issue 36
****************************************



_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems