List:       linux-ha
Subject:    Re: [Linux-HA] How to painlessly change depended upon resource groups?
From:       Arnold Krille <arnold@arnoldarts.de>
Date:       2013-08-24 17:23:18
Message-ID: <20130824192318.0d7fff8e@xingu.arnoldarts.de>

Hi,

On Fri, 23 Aug 2013 10:41:21 +0200 Ferenc Wagner <wferi@niif.hu> wrote:
> Arnold Krille <arnold@arnoldarts.de> writes:
> > On a side-note: I've had the (sad) experience that it's easier to
> > configure such stuff outside of pacemaker/corosync and use the
> > cluster only for the reliable ha things.
> What do you mean by "reliable"?  What did you experience (if you can
> put it in a few sentences)?

I used pacemaker to manage several things:
 1) drbd and VMs and the dependencies between them, plus the libvirt
 service as a clone.
 2) drbd for a directory holding the configuration of "central
 services" like dhcp, named and an apache2 for a few services. These
 were in one group depending on the drbd master (a rough sketch of this
 follows below).
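
For reference, roughly what that second setup looked like in crm shell
syntax (resource names, device paths and init scripts are made-up
examples here, not my actual configuration):

  # drbd resource holding the shared configuration directory
  primitive p_drbd_config ocf:linbit:drbd \
    params drbd_resource="config" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
  ms ms_drbd_config p_drbd_config \
    meta master-max="1" clone-max="2" notify="true"

  # the "central services" group mounted on top of the drbd master
  primitive p_fs_config ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/srv/config" fstype="ext4"
  primitive p_dhcpd lsb:isc-dhcp-server
  primitive p_named lsb:bind9
  primitive p_apache ocf:heartbeat:apache \
    params configfile="/srv/config/apache2/apache2.conf"
  group g_central p_fs_config p_dhcpd p_named p_apache

  # the group may only run where drbd is master, and only after promotion
  colocation col_central_on_drbd inf: g_central ms_drbd_config:Master
  order ord_drbd_before_central inf: ms_drbd_config:promote g_central:start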

The second turned out to be less optimal than I thought: every time you
want to change something in the dhcp or named configuration, you first
have to check on which node it is active. Then, when you restart dhcp,
it has to be done through pacemaker. And you shouldn't have a shell
sitting in the shared config directory, because otherwise pacemaker
might decide to move the resource on the restart, fail because the
directory can't be unmounted, and then fail the cluster and/or fence
the node. With disastrous results for the terminal-server VM running on
that node, affecting all the co-workers...
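
To illustrate that workflow: with such a setup a simple "restart named"
roughly turns into the following (standard pacemaker/crmsh commands,
resource names as in the sketch above):

  # 1) find out which node currently runs the group
  crm_resource --resource g_central --locate

  # 2) ssh to that node and restart the service through the cluster,
  #    not via the init script, so pacemaker doesn't see a failure
  crm resource restart p_named

  # 3) and make sure no shell has its cwd inside /srv/config, otherwise
  #    a resource move triggered along the way can fail to unmount the
  #    filesystem and escalate to fencing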
I have already dropped the central apache2 and named and replaced them
with instances configured by Chef to be identical on all nodes.
Additionally, those services aren't controlled by pacemaker anymore, so
named is still available when I shut down the cluster for maintenance.
And being able to run named on all three nodes is better than running
it only on the two nodes sharing the configuration drbd.

The next thing to do is to have dhcp configured by Chef as well and
taken out of this group. Then the group will be empty :-)

Additionally, pacemaker gets slower in its behaviour the more resources
you have. And with 10-15 virtual machines, each with one or more drbd
resources, well, currently it's 63 resources pacemaker is watching...
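
(For comparison on your own cluster, you can get a rough count of the
configured resources with something like

  crm_resource --list | wc -l

though the exact output format, and therefore the count, depends on the
pacemaker version.)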

tl;dr: pacemaker is pretty cool at making sure the defined resources
are running. But the simpler your resources are, the better. One VM
depending on one or two drbd masters is great (see the sketch below).
Synchronizing configuration and managing complicated dependencies can
be done with pacemaker, but there are better things to spend your time
on.
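
That simple pattern, again as a rough crm shell sketch (made-up names,
assuming the VM is managed with the VirtualDomain agent on top of one
drbd device):

  primitive p_drbd_vm1 ocf:linbit:drbd \
    params drbd_resource="vm1" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
  ms ms_drbd_vm1 p_drbd_vm1 \
    meta master-max="1" clone-max="2" notify="true"

  primitive p_vm1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/vm1.xml" \
    op monitor interval="30s" timeout="30s"

  # the VM runs only where its drbd is master, and only after promotion
  colocation col_vm1_on_drbd inf: p_vm1 ms_drbd_vm1:Master
  order ord_drbd_before_vm1 inf: ms_drbd_vm1:promote p_vm1:start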

> > Configuring several systems into a sane state is more a job for
> > configuration-management such as chef, puppet or at least csync2 (to
> > sync the configs).
> I'm not a big fan of configuration management systems, but they
> probably have their place.  None is present in the current setup,
> though, so setting one up for bridge configuration seemed more
> complicated than extending the cluster.  We'll see...

While automation has its advantages just by the fact that it is
automation, what made it appealing for us is the repeatability. If the
automation has proven to work once in your setup, it's easily portable
to the client's setup. Even more so if the initial proof of "working"
was in your test setup and then re-used in your own production setup.
And then re-used on the client's network...

Take a look at Chef or Puppet or Ansible, it's worth the time.

Have fun,

Arnold

PS: Sorry, it became a bit longer.

["signature.asc" (application/pgp-signature)]

_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
