
List:       fedora-extras-commits
Subject:    jpokorny pushed to pacemaker (f23).  "1.1.14-1: new release, spec refresh, crm_mon X curses fix (..more)"
From:       notifications@fedoraproject.org
Date:       2016-03-31 23:09:08
Message-ID: 20160331230908.18C2C608DDAA@bastion01.phx2.fedoraproject.org

From ec5191b256e646a61332b064fe547ed1f4b20385 Mon Sep 17 00:00:00 2001
From: Jan Pokorný <jpokorny@redhat.com>
Date: Mon, 18 Jan 2016 18:49:03 +0100
Subject: 1.1.14-1: new release, spec refresh, crm_mon X curses fix

Resolves: rhbz#1297985
---
 .gitignore                                         |    2 +-
 ...ource-Correctly-check-if-a-resource-is-un.patch |   82 -
 ...cl-5247-Imply-resources-running-on-a-cont.patch |  328 -
 ...rrectly-set-time-from-seconds-since-epoch.patch |   21 -
 ...-cl-5247-Imply-resources-running-on-a-con.patch | 1419 ----
 0008-Fix-tools-memory-leak-in-crm_resource.patch   |   33 -
 ...-The-failed-action-of-the-resource-that-o.patch |   31 -
 ...ces-Reduce-severity-of-noisy-log-messages.patch |   34 -
 ...k-xml-nodes-as-dirty-if-any-children-move.patch |   24 -
 ...md-Implement-reliable-event-notifications.patch |  565 --
 0013-Fix-cman-Suppress-implied-node-names.patch    |   47 -
 ...oose-more-appropriate-names-for-notificat.patch |   58 -
 ...md-Correctly-enable-disable-notifications.patch |   22 -
 ...port-the-completion-status-and-output-of-.patch |  109 -
 ...Print-the-nodeid-of-nodes-with-fake-names.patch |   23 -
 ...ols-Isolate-the-paths-which-truely-requir.patch |  299 -
 ...c-Display-node-state-and-quorum-data-if-a.patch |   94 -
 ...erd-Do-not-forget-about-nodes-that-leave-.patch |   23 -
 ...pacemakerd-Track-node-state-in-pacemakerd.patch |   58 -
 0022-Fix-PE-Resolve-memory-leak.patch              |   27 -
 ...cman-Purge-all-node-caches-for-crm_node-R.patch |   24 -
 ...mbership-Safely-autoreap-nodes-without-co.patch |   92 -
 ...event-segfault-by-correctly-detecting-whe.patch |   23 -
 ...n-t-add-node-ID-to-proxied-remote-node-re.patch |   29 -
 ...er_remote-memory-leak-in-ipc_proxy_dispat.patch |   35 -
 ...g-The-package-version-is-more-informative.patch |  115 -
 ...ource-Allow-the-resource-configuration-to.patch |  127 -
 ...proved-logging-when-no-pacemaker-remote-a.patch |   34 -
 ...-don-t-print-error-if-remote-key-environm.patch |   38 -
 ...epair-the-logging-of-interesting-command-.patch |  182 -
 ...Tools-Do-not-send-command-lines-to-syslog.patch |   46 -
 ...g-cibadmin-Default-once-again-to-LOG_CRIT.patch |   21 -
 ...ource-Correctly-update-existing-meta-attr.patch |   87 -
 ...ource-restart-Improved-user-feedback-on-f.patch |   27 -
 ...ource-Correctly-delete-existing-meta-attr.patch |  179 -
 ...ource-Correctly-observe-force-when-deleti.patch |   75 -
 bz1297985-fix-configure-curses-test.patch          |   23 +
 pacemaker-63f8e9a-rollup.patch                     | 5904 ---------------
 pacemaker-rollup-3a7715d.patch                     | 4919 ------------
 pacemaker-rollup-7-1-3d781d3.patch                 | 7989 --------------------
 pacemaker.spec                                     |  172 +-
 sources                                            |    2 +-
 42 files changed, 122 insertions(+), 23320 deletions(-)
 delete mode 100644 0004-Fix-crm_resource-Correctly-check-if-a-resource-is-un.patch
 delete mode 100644 0005-Fix-PE-Bug-cl-5247-Imply-resources-running-on-a-cont.patch
 delete mode 100644 0006-Fix-Date-Correctly-set-time-from-seconds-since-epoch.patch
 delete mode 100644 0007-Test-PE-Bug-cl-5247-Imply-resources-running-on-a-con.patch
 delete mode 100644 0008-Fix-tools-memory-leak-in-crm_resource.patch
 delete mode 100644 0009-Fix-pengine-The-failed-action-of-the-resource-that-o.patch
 delete mode 100644 0010-Log-services-Reduce-severity-of-noisy-log-messages.patch
 delete mode 100644 0011-Fix-xml-Mark-xml-nodes-as-dirty-if-any-children-move.patch
 delete mode 100644 0012-Feature-crmd-Implement-reliable-event-notifications.patch
 delete mode 100644 0013-Fix-cman-Suppress-implied-node-names.patch
 delete mode 100644 0014-Fix-crmd-Choose-more-appropriate-names-for-notificat.patch
 delete mode 100644 0015-Fix-crmd-Correctly-enable-disable-notifications.patch
 delete mode 100644 0016-Fix-crmd-Report-the-completion-status-and-output-of-.patch
 delete mode 100644 0017-Fix-cman-Print-the-nodeid-of-nodes-with-fake-names.patch
 delete mode 100644 0018-Refactor-Tools-Isolate-the-paths-which-truely-requir.patch
 delete mode 100644 0019-Fix-corosync-Display-node-state-and-quorum-data-if-a.patch
 delete mode 100644 0020-Fix-pacemakerd-Do-not-forget-about-nodes-that-leave-.patch
 delete mode 100644 0021-Fix-pacemakerd-Track-node-state-in-pacemakerd.patch
 delete mode 100644 0022-Fix-PE-Resolve-memory-leak.patch
 delete mode 100644 0023-Fix-cman-Purge-all-node-caches-for-crm_node-R.patch
 delete mode 100644 0024-Refactor-membership-Safely-autoreap-nodes-without-co.patch
 delete mode 100644 0025-Fix-crmd-Prevent-segfault-by-correctly-detecting-whe.patch
 delete mode 100644 0026-Fix-crmd-don-t-add-node-ID-to-proxied-remote-node-re.patch
 delete mode 100644 0027-Fix-pacemaker_remote-memory-leak-in-ipc_proxy_dispat.patch
 delete mode 100644 0028-Log-The-package-version-is-more-informative.patch
 delete mode 100644 0029-Fix-crm_resource-Allow-the-resource-configuration-to.patch
 delete mode 100644 0030-Log-lrmd-Improved-logging-when-no-pacemaker-remote-a.patch
 delete mode 100644 0031-Fix-liblrmd-don-t-print-error-if-remote-key-environm.patch
 delete mode 100644 0032-Fix-Tools-Repair-the-logging-of-interesting-command-.patch
 delete mode 100644 0033-Feature-Tools-Do-not-send-command-lines-to-syslog.patch
 delete mode 100644 0034-Log-cibadmin-Default-once-again-to-LOG_CRIT.patch
 delete mode 100644 0035-Fix-crm_resource-Correctly-update-existing-meta-attr.patch
 delete mode 100644 0036-Log-crm_resource-restart-Improved-user-feedback-on-f.patch
 delete mode 100644 0037-Fix-crm_resource-Correctly-delete-existing-meta-attr.patch
 delete mode 100644 0038-Fix-crm_resource-Correctly-observe-force-when-deleti.patch
 create mode 100644 bz1297985-fix-configure-curses-test.patch
 delete mode 100644 pacemaker-63f8e9a-rollup.patch
 delete mode 100644 pacemaker-rollup-3a7715d.patch
 delete mode 100644 pacemaker-rollup-7-1-3d781d3.patch

diff --git a/.gitignore b/.gitignore
index 1e3c89c..3ddb934 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,3 @@
 /ClusterLabs-pacemaker-*.tar.gz
-/pacemaker-*.tar.gz
+/[Pp]acemaker-*.tar.gz
 /nagios-agents-metadata-*.tar.gz
diff --git a/0004-Fix-crm_resource-Correctly-check-if-a-resource-is-un.patch b/0004-Fix-crm_resource-Correctly-check-if-a-resource-is-un.patch
deleted file mode 100644
index 1ef6a11..0000000
--- a/0004-Fix-crm_resource-Correctly-check-if-a-resource-is-un.patch
+++ /dev/null
@@ -1,82 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Fri, 14 Aug 2015 09:43:32 +1000
-Subject: [PATCH] Fix: crm_resource: Correctly check if a resource is unmanaged
- or has a target-role
-
-(cherry picked from commit 3ff29dbe2cab872b452c4580736d23d1f69736fa)
----
- tools/crm_resource.c         |  2 +-
- tools/crm_resource_runtime.c | 31 ++++++++++++++++++-------------
- 2 files changed, 19 insertions(+), 14 deletions(-)
-
-diff --git a/tools/crm_resource.c b/tools/crm_resource.c
-index 2fce3b7..156bbea 100644
---- a/tools/crm_resource.c
-+++ b/tools/crm_resource.c
-@@ -888,7 +888,7 @@ main(int argc, char **argv)
-             rsc = uber_parent(rsc);
-         }
-
--        crm_debug("Re-checking the state of %s on %s", rsc_id, host_uname);
-+        crm_debug("Re-checking the state of %s for %s on %s", rsc->id, rsc_id, \
                host_uname);
-         if(rsc) {
-             crmd_replies_needed = 0;
-             rc = cli_resource_delete(cib_conn, crmd_channel, host_uname, rsc, &data_set);
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index a270cbf..f260e19 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -616,35 +616,40 @@ cli_resource_delete(cib_t *cib_conn, crm_ipc_t * crmd_channel, const char *host_
- void
- cli_resource_check(cib_t * cib_conn, resource_t *rsc)
- {
--
-+    int need_nl = 0;
-     char *role_s = NULL;
-     char *managed = NULL;
-     resource_t *parent = uber_parent(rsc);
-
--    find_resource_attr(cib_conn, XML_ATTR_ID, parent->id,
--                       XML_TAG_META_SETS, NULL, NULL, XML_RSC_ATTR_MANAGED, &managed);
-+    find_resource_attr(cib_conn, XML_NVPAIR_ATTR_VALUE, parent->id,
-+                       NULL, NULL, NULL, XML_RSC_ATTR_MANAGED, &managed);
-
--    find_resource_attr(cib_conn, XML_ATTR_ID, parent->id,
--                       XML_TAG_META_SETS, NULL, NULL, XML_RSC_ATTR_TARGET_ROLE, &role_s);
-+    find_resource_attr(cib_conn, XML_NVPAIR_ATTR_VALUE, parent->id,
-+                       NULL, NULL, NULL, XML_RSC_ATTR_TARGET_ROLE, &role_s);
-
--    if(managed == NULL) {
--        managed = strdup("1");
--    }
--    if(crm_is_true(managed) == FALSE) {
--        printf("\n\t*Resource %s is configured to not be managed by the cluster\n", \
                parent->id);
--    }
-     if(role_s) {
-         enum rsc_role_e role = text2role(role_s);
-         if(role == RSC_ROLE_UNKNOWN) {
-             // Treated as if unset
-
-         } else if(role == RSC_ROLE_STOPPED) {
--            printf("\n\t* The configuration specifies that '%s' should remain \
                stopped\n", parent->id);
-+            printf("\n  * The configuration specifies that '%s' should remain \
                stopped\n", parent->id);
-+            need_nl++;
-
-         } else if(parent->variant > pe_clone && role != RSC_ROLE_MASTER) {
--            printf("\n\t* The configuration specifies that '%s' should not be \
                promoted\n", parent->id);
-+            printf("\n  * The configuration specifies that '%s' should not be \
                promoted\n", parent->id);
-+            need_nl++;
-         }
-     }
-+
-+    if(managed && crm_is_true(managed) == FALSE) {
-+        printf("%s  * The configuration prevents the cluster from stopping or \
                starting '%s' (unmanaged)\n", need_nl == 0?"\n":"", parent->id);
-+        need_nl++;
-+    }
-+
-+    if(need_nl) {
-+        printf("\n");
-+    }
- }
-
- int
diff --git a/0005-Fix-PE-Bug-cl-5247-Imply-resources-running-on-a-cont.patch b/0005-Fix-PE-Bug-cl-5247-Imply-resources-running-on-a-cont.patch
deleted file mode 100644
index cf19707..0000000
--- a/0005-Fix-PE-Bug-cl-5247-Imply-resources-running-on-a-cont.patch
+++ /dev/null
@@ -1,328 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 18 Aug 2015 10:30:49 +1000
-Subject: [PATCH] Fix: PE: Bug cl#5247 - Imply resources running on a container
- are stopped when the container is stopped
-
-(cherry picked from commit e10eff1902d5b451454e2d467ee337c964f536ab)
----
- lib/pengine/unpack.c                  | 29 ++++++++++++++++++++---------
- pengine/allocate.c                    | 17 +++++++++++++++++
- pengine/graph.c                       |  7 ++++++-
- pengine/test10/bug-rh-1097457.dot     |  2 ++
- pengine/test10/bug-rh-1097457.exp     | 12 ++++++++++--
- pengine/test10/bug-rh-1097457.summary | 10 +++++-----
- pengine/test10/whitebox-fail1.dot     |  1 +
- pengine/test10/whitebox-fail1.exp     |  6 +++++-
- pengine/test10/whitebox-fail1.summary |  8 ++++----
- pengine/test10/whitebox-fail2.dot     |  1 +
- pengine/test10/whitebox-fail2.exp     |  6 +++++-
- pengine/test10/whitebox-fail2.summary |  8 ++++----
- 12 files changed, 80 insertions(+), 27 deletions(-)
-
-diff --git a/lib/pengine/unpack.c b/lib/pengine/unpack.c
-index 106c674..0f83be4 100644
---- a/lib/pengine/unpack.c
-+++ b/lib/pengine/unpack.c
-@@ -44,7 +44,7 @@ CRM_TRACE_INIT_DATA(pe_status);
-
- gboolean unpack_rsc_op(resource_t * rsc, node_t * node, xmlNode * xml_op,
-                        enum action_fail_response *failed, pe_working_set_t * data_set);
--static gboolean determine_remote_online_status(node_t * this_node);
-+static gboolean determine_remote_online_status(pe_working_set_t * data_set, node_t * this_node);
-
- static gboolean
- is_dangling_container_remote_node(node_t *node)
-@@ -73,6 +73,8 @@ pe_fence_node(pe_working_set_t * data_set, node_t * node, const char *reason)
-         if (is_set(rsc->flags, pe_rsc_failed) == FALSE) {
-             crm_warn("Remote node %s will be fenced by recovering container resource %s",
-                 node->details->uname, rsc->id, reason);
-+            /* node->details->unclean = TRUE; */
-+            node->details->remote_requires_reset = TRUE;
-             set_bit(rsc->flags, pe_rsc_failed);
-         }
-     } else if (is_dangling_container_remote_node(node)) {
-@@ -1157,7 +1159,7 @@ unpack_remote_status(xmlNode * status, pe_working_set_t * data_set)
-         if ((this_node == NULL) || (is_remote_node(this_node) == FALSE)) {
-             continue;
-         }
--        determine_remote_online_status(this_node);
-+        determine_remote_online_status(data_set, this_node);
-     }
-
-     /* process attributes */
-@@ -1366,7 +1368,7 @@ determine_online_status_fencing(pe_working_set_t * data_set, xmlNode * node_stat
- }
-
- static gboolean
--determine_remote_online_status(node_t * this_node)
-+determine_remote_online_status(pe_working_set_t * data_set, node_t * this_node)
- {
-     resource_t *rsc = this_node->details->remote_rsc;
-     resource_t *container = NULL;
-@@ -1393,13 +1395,21 @@ determine_remote_online_status(node_t * this_node)
-     }
-
-     /* Now check all the failure conditions. */
--    if (is_set(rsc->flags, pe_rsc_failed) ||
--        (rsc->role == RSC_ROLE_STOPPED) ||
--        (container && is_set(container->flags, pe_rsc_failed)) ||
--        (container && container->role == RSC_ROLE_STOPPED)) {
-+    if(container && is_set(container->flags, pe_rsc_failed)) {
-+        crm_trace("Remote node %s is set to UNCLEAN. rsc failed.", \
                this_node->details->id);
-+        this_node->details->online = FALSE;
-+        this_node->details->remote_requires_reset = TRUE;
-
--        crm_trace("Remote node %s is set to OFFLINE. node is stopped or rsc \
                failed.", this_node->details->id);
-+    } else if(is_set(rsc->flags, pe_rsc_failed)) {
-+        crm_trace("Remote node %s is set to OFFLINE. rsc failed.", \
                this_node->details->id);
-         this_node->details->online = FALSE;
-+
-+    } else if (rsc->role == RSC_ROLE_STOPPED
-+        || (container && container->role == RSC_ROLE_STOPPED)) {
-+
-+        crm_trace("Remote node %s is set to OFFLINE. node is stopped.", \
                this_node->details->id);
-+        this_node->details->online = FALSE;
-+        this_node->details->remote_requires_reset = FALSE;
-     }
-
- remote_online_done:
-@@ -3375,7 +3385,8 @@ find_operations(const char *rsc, const char *node, gboolean active_filter,
-                 continue;
-
-             } else if (is_remote_node(this_node)) {
--                determine_remote_online_status(this_node);
-+                determine_remote_online_status(data_set, this_node);
-+
-             } else {
-                 determine_online_status(node_state, this_node, data_set);
-             }
-diff --git a/pengine/allocate.c b/pengine/allocate.c
-index c2e56f9..65ae05d 100644
---- a/pengine/allocate.c
-+++ b/pengine/allocate.c
-@@ -1406,6 +1406,23 @@ stage6(pe_working_set_t * data_set)
-
-         /* remote-nodes associated with a container resource (such as a vm) are not fenced */
-         if (is_container_remote_node(node)) {
-+            /* Guest */
-+            if (need_stonith
-+                && node->details->remote_requires_reset
-+                && pe_can_fence(data_set, node)) {
-+                resource_t *container = node->details->remote_rsc->container;
-+                char *key = stop_key(container);
-+                GListPtr stop_list = find_actions(container->actions, key, NULL);
-+
-+                crm_info("Impliying node %s is down when container %s is stopped \
                (%p)",
-+                         node->details->uname, container->id, stop_list);
-+                if(stop_list) {
-+                    stonith_constraints(node, stop_list->data, data_set);
-+                }
-+
-+                g_list_free(stop_list);
-+                free(key);
-+            }
-             continue;
-         }
-
-diff --git a/pengine/graph.c b/pengine/graph.c
-index 3d832f0..a50f15b 100644
---- a/pengine/graph.c
-+++ b/pengine/graph.c
-@@ -697,7 +697,12 @@ stonith_constraints(node_t * node, action_t * stonith_op, pe_working_set_t * dat
-         for (lpc = data_set->resources; lpc != NULL; lpc = lpc->next) {
-             resource_t *rsc = (resource_t *) lpc->data;
-
--            rsc_stonith_ordering(rsc, stonith_op, data_set);
-+            if(stonith_op->rsc == NULL) {
-+                rsc_stonith_ordering(rsc, stonith_op, data_set);
-+
-+            } else if(stonith_op->rsc != rsc && stonith_op->rsc != rsc->container) {
-+                rsc_stonith_ordering(rsc, stonith_op, data_set);
-+            }
-         }
-     }
-
-diff --git a/pengine/test10/bug-rh-1097457.dot b/pengine/test10/bug-rh-1097457.dot
-index 666099c..078d177 100644
---- a/pengine/test10/bug-rh-1097457.dot
-+++ b/pengine/test10/bug-rh-1097457.dot
-@@ -49,10 +49,12 @@ digraph "g" {
- "VM2_start_0 lama3" [ style=bold color="green" fontcolor="black"]
- "VM2_stop_0 lama3" -> "FAKE4-IP_stop_0 lamaVM2" [ style = bold]
- "VM2_stop_0 lama3" -> "FAKE4_stop_0 lamaVM2" [ style = bold]
-+"VM2_stop_0 lama3" -> "FAKE6-clone_stop_0" [ style = bold]
- "VM2_stop_0 lama3" -> "FAKE6_stop_0 lamaVM2" [ style = bold]
- "VM2_stop_0 lama3" -> "FSlun3_stop_0 lamaVM2" [ style = bold]
- "VM2_stop_0 lama3" -> "VM2_start_0 lama3" [ style = bold]
- "VM2_stop_0 lama3" -> "all_stopped" [ style = bold]
-+"VM2_stop_0 lama3" -> "lamaVM2-G4_stop_0" [ style = bold]
- "VM2_stop_0 lama3" [ style=bold color="green" fontcolor="black"]
- "all_stopped" [ style=bold color="green" fontcolor="orange"]
- "lamaVM2-G4_running_0" [ style=bold color="green" fontcolor="orange"]
-diff --git a/pengine/test10/bug-rh-1097457.exp b/pengine/test10/bug-rh-1097457.exp
-index 36af9f3..175f413 100644
---- a/pengine/test10/bug-rh-1097457.exp
-+++ b/pengine/test10/bug-rh-1097457.exp
-@@ -119,7 +119,11 @@
-         <attributes CRM_meta_timeout="20000" />
-       </pseudo_event>
-     </action_set>
--    <inputs/>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="40" operation="stop" operation_key="VM2_stop_0" on_node="lama3" \
                on_node_uuid="2"/>
-+      </trigger>
-+    </inputs>
-   </synapse>
-   <synapse id="9">
-     <action_set>
-@@ -331,7 +335,11 @@
-         <attributes CRM_meta_clone_max="3" CRM_meta_clone_node_max="1" \
                CRM_meta_globally_unique="false" CRM_meta_notify="false" \
                CRM_meta_timeout="20000" />
-       </pseudo_event>
-     </action_set>
--    <inputs/>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="40" operation="stop" operation_key="VM2_stop_0" on_node="lama3" \
                on_node_uuid="2"/>
-+      </trigger>
-+    </inputs>
-   </synapse>
-   <synapse id="22" priority="1000000">
-     <action_set>
-diff --git a/pengine/test10/bug-rh-1097457.summary b/pengine/test10/bug-rh-1097457.summary
-index e2f235d..c8751ae 100644
---- a/pengine/test10/bug-rh-1097457.summary
-+++ b/pengine/test10/bug-rh-1097457.summary
-@@ -39,17 +39,17 @@ Transition Summary:
-  * Restart lamaVM2	(Started lama3)
-
- Executing cluster transition:
-- * Pseudo action:   lamaVM2-G4_stop_0
-- * Pseudo action:   FAKE6-clone_stop_0
-  * Resource action: lamaVM2         stop on lama3
-  * Resource action: VM2             stop on lama3
-+ * Pseudo action:   lamaVM2-G4_stop_0
-  * Pseudo action:   FAKE4-IP_stop_0
-- * Pseudo action:   FAKE6_stop_0
-- * Pseudo action:   FAKE6-clone_stopped_0
-- * Pseudo action:   FAKE6-clone_start_0
-+ * Pseudo action:   FAKE6-clone_stop_0
-  * Resource action: VM2             start on lama3
-  * Resource action: VM2             monitor000 on lama3
-  * Pseudo action:   FAKE4_stop_0
-+ * Pseudo action:   FAKE6_stop_0
-+ * Pseudo action:   FAKE6-clone_stopped_0
-+ * Pseudo action:   FAKE6-clone_start_0
-  * Resource action: lamaVM2         start on lama3
-  * Resource action: lamaVM2         monitor0000 on lama3
-  * Resource action: FSlun3          monitor000 on lamaVM2
-diff --git a/pengine/test10/whitebox-fail1.dot b/pengine/test10/whitebox-fail1.dot
-index b595015..0f0fe26 100644
---- a/pengine/test10/whitebox-fail1.dot
-+++ b/pengine/test10/whitebox-fail1.dot
-@@ -26,6 +26,7 @@ digraph "g" {
- "container1_start_0 18node2" -> "lxc1_start_0 18node2" [ style = bold]
- "container1_start_0 18node2" [ style=bold color="green" fontcolor="black"]
- "container1_stop_0 18node2" -> "B_stop_0 lxc1" [ style = bold]
-+"container1_stop_0 18node2" -> "M-clone_stop_0" [ style = bold]
- "container1_stop_0 18node2" -> "M_stop_0 lxc1" [ style = bold]
- "container1_stop_0 18node2" -> "all_stopped" [ style = bold]
- "container1_stop_0 18node2" -> "container1_start_0 18node2" [ style = bold]
-diff --git a/pengine/test10/whitebox-fail1.exp b/pengine/test10/whitebox-fail1.exp
-index 834b231..01bb142 100644
---- a/pengine/test10/whitebox-fail1.exp
-+++ b/pengine/test10/whitebox-fail1.exp
-@@ -96,7 +96,11 @@
-         <attributes CRM_meta_clone_max="5" CRM_meta_clone_node_max="1" \
                CRM_meta_globally_unique="false" CRM_meta_notify="false" \
                CRM_meta_timeout="20000" />
-       </pseudo_event>
-     </action_set>
--    <inputs/>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="5" operation="stop" operation_key="container1_stop_0" \
                on_node="18node2" on_node_uuid="2"/>
-+      </trigger>
-+    </inputs>
-   </synapse>
-   <synapse id="7" priority="1000000">
-     <action_set>
-diff --git a/pengine/test10/whitebox-fail1.summary b/pengine/test10/whitebox-fail1.summary
-index 5e5887b..1586407 100644
---- a/pengine/test10/whitebox-fail1.summary
-+++ b/pengine/test10/whitebox-fail1.summary
-@@ -20,17 +20,17 @@ Transition Summary:
-  * Restart lxc1	(Started 18node2)
-
- Executing cluster transition:
-- * Pseudo action:   M-clone_stop_0
-  * Resource action: lxc1            stop on 18node2
-  * Resource action: container1      stop on 18node2
-+ * Pseudo action:   M-clone_stop_0
-+ * Pseudo action:   B_stop_0
-+ * Resource action: container1      start on 18node2
-  * Pseudo action:   M_stop_0
-  * Pseudo action:   M-clone_stopped_0
-  * Pseudo action:   M-clone_start_0
-- * Pseudo action:   B_stop_0
-- * Pseudo action:   all_stopped
-- * Resource action: container1      start on 18node2
-  * Resource action: lxc1            start on 18node2
-  * Resource action: lxc1            monitor0000 on 18node2
-+ * Pseudo action:   all_stopped
-  * Resource action: M               start on lxc1
-  * Pseudo action:   M-clone_running_0
-  * Resource action: B               start on lxc1
-diff --git a/pengine/test10/whitebox-fail2.dot b/pengine/test10/whitebox-fail2.dot
-index b595015..0f0fe26 100644
---- a/pengine/test10/whitebox-fail2.dot
-+++ b/pengine/test10/whitebox-fail2.dot
-@@ -26,6 +26,7 @@ digraph "g" {
- "container1_start_0 18node2" -> "lxc1_start_0 18node2" [ style = bold]
- "container1_start_0 18node2" [ style=bold color="green" fontcolor="black"]
- "container1_stop_0 18node2" -> "B_stop_0 lxc1" [ style = bold]
-+"container1_stop_0 18node2" -> "M-clone_stop_0" [ style = bold]
- "container1_stop_0 18node2" -> "M_stop_0 lxc1" [ style = bold]
- "container1_stop_0 18node2" -> "all_stopped" [ style = bold]
- "container1_stop_0 18node2" -> "container1_start_0 18node2" [ style = bold]
-diff --git a/pengine/test10/whitebox-fail2.exp b/pengine/test10/whitebox-fail2.exp
-index 834b231..01bb142 100644
---- a/pengine/test10/whitebox-fail2.exp
-+++ b/pengine/test10/whitebox-fail2.exp
-@@ -96,7 +96,11 @@
-         <attributes CRM_meta_clone_max="5" CRM_meta_clone_node_max="1" \
                CRM_meta_globally_unique="false" CRM_meta_notify="false" \
                CRM_meta_timeout="20000" />
-       </pseudo_event>
-     </action_set>
--    <inputs/>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="5" operation="stop" operation_key="container1_stop_0" \
                on_node="18node2" on_node_uuid="2"/>
-+      </trigger>
-+    </inputs>
-   </synapse>
-   <synapse id="7" priority="1000000">
-     <action_set>
-diff --git a/pengine/test10/whitebox-fail2.summary b/pengine/test10/whitebox-fail2.summary
-index 338173d..ab40d99 100644
---- a/pengine/test10/whitebox-fail2.summary
-+++ b/pengine/test10/whitebox-fail2.summary
-@@ -20,17 +20,17 @@ Transition Summary:
-  * Recover lxc1	(Started 18node2)
-
- Executing cluster transition:
-- * Pseudo action:   M-clone_stop_0
-  * Resource action: lxc1            stop on 18node2
-  * Resource action: container1      stop on 18node2
-+ * Pseudo action:   M-clone_stop_0
-+ * Pseudo action:   B_stop_0
-+ * Resource action: container1      start on 18node2
-  * Pseudo action:   M_stop_0
-  * Pseudo action:   M-clone_stopped_0
-  * Pseudo action:   M-clone_start_0
-- * Pseudo action:   B_stop_0
-- * Pseudo action:   all_stopped
-- * Resource action: container1      start on 18node2
-  * Resource action: lxc1            start on 18node2
-  * Resource action: lxc1            monitor0000 on 18node2
-+ * Pseudo action:   all_stopped
-  * Resource action: M               start on lxc1
-  * Pseudo action:   M-clone_running_0
-  * Resource action: B               start on lxc1
diff --git a/0006-Fix-Date-Correctly-set-time-from-seconds-since-epoch.patch b/0006-Fix-Date-Correctly-set-time-from-seconds-since-epoch.patch
deleted file mode 100644
index ea40f7e..0000000
--- a/0006-Fix-Date-Correctly-set-time-from-seconds-since-epoch.patch
+++ /dev/null
@@ -1,21 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 18 Aug 2015 11:06:13 +1000
-Subject: [PATCH] Fix: Date: Correctly set time from seconds-since-epoch
-
-(cherry picked from commit efa318114d0b2124cc82fe143403e6de502e0134)
----
- lib/common/iso8601.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/lib/common/iso8601.c b/lib/common/iso8601.c
-index 769e01b..5f4a73d 100644
---- a/lib/common/iso8601.c
-+++ b/lib/common/iso8601.c
-@@ -1011,6 +1011,7 @@ ha_set_tm_time(crm_time_t * target, struct tm *source)
-         target->days = 1 + source->tm_yday;
-     }
-
-+    target->seconds = 0;
-     if (source->tm_hour >= 0) {
-         target->seconds += 60 * 60 * source->tm_hour;
-     }
diff --git a/0007-Test-PE-Bug-cl-5247-Imply-resources-running-on-a-con.patch b/0007-Test-PE-Bug-cl-5247-Imply-resources-running-on-a-con.patch
deleted file mode 100644
index 74aa4b1..0000000
--- a/0007-Test-PE-Bug-cl-5247-Imply-resources-running-on-a-con.patch
+++ /dev/null
@@ -1,1419 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 18 Aug 2015 10:31:06 +1000
-Subject: [PATCH] Test: PE: Bug cl#5247 - Imply resources running on a
- container are stopped when the container is stopped
-
-(cherry picked from commit 825e82a5098bde0412944c7d4f54c3d825ddff08)
----
- pengine/regression.sh              |  29 +-
- pengine/test10/bug-cl-5247.dot     | 136 +++++++
- pengine/test10/bug-cl-5247.exp     | 704 +++++++++++++++++++++++++++++++++++++
- pengine/test10/bug-cl-5247.scores  |  84 +++++
- pengine/test10/bug-cl-5247.summary |  96 +++++
- pengine/test10/bug-cl-5247.xml     | 295 ++++++++++++++++
- 6 files changed, 1331 insertions(+), 13 deletions(-)
- create mode 100644 pengine/test10/bug-cl-5247.dot
- create mode 100644 pengine/test10/bug-cl-5247.exp
- create mode 100644 pengine/test10/bug-cl-5247.scores
- create mode 100644 pengine/test10/bug-cl-5247.summary
- create mode 100644 pengine/test10/bug-cl-5247.xml
-
-diff --git a/pengine/regression.sh b/pengine/regression.sh
-index 7f73f92..1517e3d 100755
---- a/pengine/regression.sh
-+++ b/pengine/regression.sh
-@@ -31,19 +31,6 @@ info Performing the following tests from $io_dir
- create_mode="false"
-
- echo ""
--do_test cloned_start_one  "order first clone then clone... first clone_min=2"
--do_test cloned_start_two  "order first clone then clone... first clone_min=2"
--do_test cloned_stop_one   "order first clone then clone... first clone_min=2"
--do_test cloned_stop_two   "order first clone then clone... first clone_min=2"
--do_test clone_min_interleave_start_one "order first clone then clone... first clone_min=2 and then has interleave=true"
--do_test clone_min_interleave_start_two "order first clone then clone... first clone_min=2 and then has interleave=true"
--do_test clone_min_interleave_stop_one  "order first clone then clone... first clone_min=2 and then has interleave=true"
--do_test clone_min_interleave_stop_two  "order first clone then clone... first clone_min=2 and then has interleave=true"
--do_test clone_min_start_one "order first clone then primitive... first clone_min=2"
--do_test clone_min_start_two "order first clone then primitive... first clone_min=2"
--do_test clone_min_stop_all  "order first clone then primitive... first clone_min=2"
--do_test clone_min_stop_one  "order first clone then primitive... first clone_min=2"
--do_test clone_min_stop_two  "order first clone then primitive... first clone_min=2"
-
- do_test simple1 "Offline     "
- do_test simple2 "Start       "
-@@ -373,6 +360,21 @@ do_test clone-interleave-2 "Clone-3 must stop on pcmk-1 due to interleaved order
- do_test clone-interleave-3 "Clone-3 must be recovered on pcmk-1 due to interleaved ordering (no colocation)"
-
- echo ""
-+do_test cloned_start_one  "order first clone then clone... first clone_min=2"
-+do_test cloned_start_two  "order first clone then clone... first clone_min=2"
-+do_test cloned_stop_one   "order first clone then clone... first clone_min=2"
-+do_test cloned_stop_two   "order first clone then clone... first clone_min=2"
-+do_test clone_min_interleave_start_one "order first clone then clone... first clone_min=2 and then has interleave=true"
-+do_test clone_min_interleave_start_two "order first clone then clone... first clone_min=2 and then has interleave=true"
-+do_test clone_min_interleave_stop_one  "order first clone then clone... first clone_min=2 and then has interleave=true"
-+do_test clone_min_interleave_stop_two  "order first clone then clone... first clone_min=2 and then has interleave=true"
-+do_test clone_min_start_one "order first clone then primitive... first clone_min=2"
-+do_test clone_min_start_two "order first clone then primitive... first clone_min=2"
-+do_test clone_min_stop_all  "order first clone then primitive... first clone_min=2"
-+do_test clone_min_stop_one  "order first clone then primitive... first clone_min=2"
-+do_test clone_min_stop_two  "order first clone then primitive... first clone_min=2"
-+
-+echo ""
- do_test unfence-startup "Clean unfencing"
- do_test unfence-definition "Unfencing when the agent changes"
- do_test unfence-parameters "Unfencing when the agent parameters changes"
-@@ -785,6 +787,7 @@ do_test container-group-3 "Container in group - stop failed"
- do_test container-group-4 "Container in group - reached migration-threshold"
- do_test container-is-remote-node "Place resource within container when container is remote-node"
- do_test bug-rh-1097457 "Kill user defined container/contents ordering"
-+do_test bug-cl-5247 "Graph loop when recovering m/s resource in a container"
-
- echo ""
- do_test whitebox-fail1 "Fail whitebox container rsc."
-diff --git a/pengine/test10/bug-cl-5247.dot b/pengine/test10/bug-cl-5247.dot
-new file mode 100644
-index 0000000..ed728ac
---- /dev/null
-+++ b/pengine/test10/bug-cl-5247.dot
-@@ -0,0 +1,136 @@
-+digraph "g" {
-+"all_stopped" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith1_running_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith1_start_0" -> "grpStonith1_running_0" [ style = bold]
-+"grpStonith1_start_0" -> "prmStonith1-2_start_0 bl460g8n4" [ style = bold]
-+"grpStonith1_start_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith1_stop_0" -> "grpStonith1_stopped_0" [ style = bold]
-+"grpStonith1_stop_0" -> "prmStonith1-2_stop_0 bl460g8n4" [ style = bold]
-+"grpStonith1_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith1_stopped_0" -> "grpStonith1_start_0" [ style = bold]
-+"grpStonith1_stopped_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith2_running_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith2_start_0" -> "grpStonith2_running_0" [ style = bold]
-+"grpStonith2_start_0" -> "prmStonith2-2_start_0 bl460g8n3" [ style = bold]
-+"grpStonith2_start_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith2_stop_0" -> "grpStonith2_stopped_0" [ style = bold]
-+"grpStonith2_stop_0" -> "prmStonith2-2_stop_0 bl460g8n3" [ style = bold]
-+"grpStonith2_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"grpStonith2_stopped_0" -> "grpStonith2_start_0" [ style = bold]
-+"grpStonith2_stopped_0" [ style=bold color="green" fontcolor="orange"]
-+"master-group_running_0" [ style=bold color="green" fontcolor="orange"]
-+"master-group_start_0" -> "master-group_running_0" [ style = bold]
-+"master-group_start_0" -> "vip-master_start_0 pgsr01" [ style = bold]
-+"master-group_start_0" -> "vip-rep_start_0 pgsr01" [ style = bold]
-+"master-group_start_0" [ style=bold color="green" fontcolor="orange"]
-+"master-group_stop_0" -> "master-group_stopped_0" [ style = bold]
-+"master-group_stop_0" -> "vip-master_stop_0 pgsr02" [ style = bold]
-+"master-group_stop_0" -> "vip-rep_stop_0 pgsr02" [ style = bold]
-+"master-group_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"master-group_stopped_0" -> "master-group_start_0" [ style = bold]
-+"master-group_stopped_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_confirmed-post_notify_demoted_0" -> "master-group_stop_0" [ style = \
                bold]
-+"msPostgresql_confirmed-post_notify_demoted_0" -> "msPostgresql_pre_notify_stop_0" \
                [ style = bold]
-+"msPostgresql_confirmed-post_notify_demoted_0" -> "pgsql_monitor_9000 pgsr01" [ \
                style = bold]
-+"msPostgresql_confirmed-post_notify_demoted_0" [ style=bold color="green" \
                fontcolor="orange"]
-+"msPostgresql_confirmed-post_notify_stopped_0" -> "all_stopped" [ style = bold]
-+"msPostgresql_confirmed-post_notify_stopped_0" -> "pgsql_monitor_9000 pgsr01" [ \
                style = bold]
-+"msPostgresql_confirmed-post_notify_stopped_0" [ style=bold color="green" \
                fontcolor="orange"]
-+"msPostgresql_confirmed-pre_notify_demote_0" -> "msPostgresql_demote_0" [ style = \
                bold]
-+"msPostgresql_confirmed-pre_notify_demote_0" -> \
                "msPostgresql_post_notify_demoted_0" [ style = bold]
-+"msPostgresql_confirmed-pre_notify_demote_0" [ style=bold color="green" \
                fontcolor="orange"]
-+"msPostgresql_confirmed-pre_notify_stop_0" -> "msPostgresql_post_notify_stopped_0" \
                [ style = bold]
-+"msPostgresql_confirmed-pre_notify_stop_0" -> "msPostgresql_stop_0" [ style = bold]
-+"msPostgresql_confirmed-pre_notify_stop_0" [ style=bold color="green" \
                fontcolor="orange"]
-+"msPostgresql_demote_0" -> "msPostgresql_demoted_0" [ style = bold]
-+"msPostgresql_demote_0" -> "pgsql_demote_0 pgsr02" [ style = bold]
-+"msPostgresql_demote_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_demoted_0" -> "msPostgresql_post_notify_demoted_0" [ style = bold]
-+"msPostgresql_demoted_0" -> "msPostgresql_stop_0" [ style = bold]
-+"msPostgresql_demoted_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_post_notify_demoted_0" -> \
                "msPostgresql_confirmed-post_notify_demoted_0" [ style = bold]
-+"msPostgresql_post_notify_demoted_0" -> "pgsql_post_notify_demoted_0 pgsr01" [ \
                style = bold]
-+"msPostgresql_post_notify_demoted_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_post_notify_stopped_0" -> \
                "msPostgresql_confirmed-post_notify_stopped_0" [ style = bold]
-+"msPostgresql_post_notify_stopped_0" -> "pgsql_post_notify_stop_0 pgsr01" [ style = \
                bold]
-+"msPostgresql_post_notify_stopped_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_pre_notify_demote_0" -> "msPostgresql_confirmed-pre_notify_demote_0" \
                [ style = bold]
-+"msPostgresql_pre_notify_demote_0" -> "pgsql_pre_notify_demote_0 pgsr01" [ style = \
                bold]
-+"msPostgresql_pre_notify_demote_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_pre_notify_stop_0" -> "msPostgresql_confirmed-pre_notify_stop_0" [ \
                style = bold]
-+"msPostgresql_pre_notify_stop_0" -> "pgsql_pre_notify_stop_0 pgsr01" [ style = \
                bold]
-+"msPostgresql_pre_notify_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_stop_0" -> "msPostgresql_stopped_0" [ style = bold]
-+"msPostgresql_stop_0" -> "pgsql_stop_0 pgsr02" [ style = bold]
-+"msPostgresql_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"msPostgresql_stopped_0" -> "msPostgresql_post_notify_stopped_0" [ style = bold]
-+"msPostgresql_stopped_0" [ style=bold color="green" fontcolor="orange"]
-+"pgsql_confirmed-post_notify_stop_0" -> "all_stopped" [ style = bold]
-+"pgsql_confirmed-post_notify_stop_0" -> "pgsql_monitor_9000 pgsr01" [ style = bold]
-+"pgsql_confirmed-post_notify_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"pgsql_demote_0 pgsr02" -> "msPostgresql_demoted_0" [ style = bold]
-+"pgsql_demote_0 pgsr02" -> "pgsql_stop_0 pgsr02" [ style = bold]
-+"pgsql_demote_0 pgsr02" [ style=bold color="green" fontcolor="orange"]
-+"pgsql_monitor_9000 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"pgsql_post_notify_demoted_0 pgsr01" -> \
                "msPostgresql_confirmed-post_notify_demoted_0" [ style = bold]
-+"pgsql_post_notify_demoted_0 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"pgsql_post_notify_stop_0 pgsr01" -> "msPostgresql_confirmed-post_notify_stopped_0" \
                [ style = bold]
-+"pgsql_post_notify_stop_0 pgsr01" -> "pgsql_confirmed-post_notify_stop_0" [ style = \
                bold]
-+"pgsql_post_notify_stop_0 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"pgsql_post_notify_stop_0" -> "pgsql_confirmed-post_notify_stop_0" [ style = bold]
-+"pgsql_post_notify_stop_0" -> "pgsql_post_notify_stop_0 pgsr01" [ style = bold]
-+"pgsql_post_notify_stop_0" [ style=bold color="green" fontcolor="orange"]
-+"pgsql_pre_notify_demote_0 pgsr01" -> "msPostgresql_confirmed-pre_notify_demote_0" \
                [ style = bold]
-+"pgsql_pre_notify_demote_0 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"pgsql_pre_notify_stop_0 pgsr01" -> "msPostgresql_confirmed-pre_notify_stop_0" [ \
                style = bold]
-+"pgsql_pre_notify_stop_0 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"pgsql_stop_0 pgsr02" -> "all_stopped" [ style = bold]
-+"pgsql_stop_0 pgsr02" -> "msPostgresql_stopped_0" [ style = bold]
-+"pgsql_stop_0 pgsr02" [ style=bold color="green" fontcolor="orange"]
-+"pgsr02_stop_0 bl460g8n4" -> "all_stopped" [ style = bold]
-+"pgsr02_stop_0 bl460g8n4" -> "prmDB2_stop_0 bl460g8n4" [ style = bold]
-+"pgsr02_stop_0 bl460g8n4" [ style=bold color="green" fontcolor="black"]
-+"prmDB2_stop_0 bl460g8n4" -> "all_stopped" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "master-group_stop_0" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "msPostgresql_stop_0" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "pgsql_demote_0 pgsr02" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "pgsql_post_notify_stop_0" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "pgsql_stop_0 pgsr02" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "vip-master_stop_0 pgsr02" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" -> "vip-rep_stop_0 pgsr02" [ style = bold]
-+"prmDB2_stop_0 bl460g8n4" [ style=bold color="green" fontcolor="black"]
-+"prmStonith1-2_monitor_3600000 bl460g8n4" [ style=bold color="green" \
                fontcolor="black"]
-+"prmStonith1-2_start_0 bl460g8n4" -> "grpStonith1_running_0" [ style = bold]
-+"prmStonith1-2_start_0 bl460g8n4" -> "prmStonith1-2_monitor_3600000 bl460g8n4" [ \
                style = bold]
-+"prmStonith1-2_start_0 bl460g8n4" [ style=bold color="green" fontcolor="black"]
-+"prmStonith1-2_stop_0 bl460g8n4" -> "all_stopped" [ style = bold]
-+"prmStonith1-2_stop_0 bl460g8n4" -> "grpStonith1_stopped_0" [ style = bold]
-+"prmStonith1-2_stop_0 bl460g8n4" -> "prmStonith1-2_start_0 bl460g8n4" [ style = \
                bold]
-+"prmStonith1-2_stop_0 bl460g8n4" [ style=bold color="green" fontcolor="orange"]
-+"prmStonith2-2_monitor_3600000 bl460g8n3" [ style=bold color="green" \
                fontcolor="black"]
-+"prmStonith2-2_start_0 bl460g8n3" -> "grpStonith2_running_0" [ style = bold]
-+"prmStonith2-2_start_0 bl460g8n3" -> "prmStonith2-2_monitor_3600000 bl460g8n3" [ \
                style = bold]
-+"prmStonith2-2_start_0 bl460g8n3" [ style=bold color="green" fontcolor="black"]
-+"prmStonith2-2_stop_0 bl460g8n3" -> "all_stopped" [ style = bold]
-+"prmStonith2-2_stop_0 bl460g8n3" -> "grpStonith2_stopped_0" [ style = bold]
-+"prmStonith2-2_stop_0 bl460g8n3" -> "prmStonith2-2_start_0 bl460g8n3" [ style = \
                bold]
-+"prmStonith2-2_stop_0 bl460g8n3" [ style=bold color="green" fontcolor="black"]
-+"vip-master_monitor_10000 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"vip-master_start_0 pgsr01" -> "master-group_running_0" [ style = bold]
-+"vip-master_start_0 pgsr01" -> "vip-master_monitor_10000 pgsr01" [ style = bold]
-+"vip-master_start_0 pgsr01" -> "vip-rep_start_0 pgsr01" [ style = bold]
-+"vip-master_start_0 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"vip-master_stop_0 pgsr02" -> "all_stopped" [ style = bold]
-+"vip-master_stop_0 pgsr02" -> "master-group_stopped_0" [ style = bold]
-+"vip-master_stop_0 pgsr02" -> "vip-master_start_0 pgsr01" [ style = bold]
-+"vip-master_stop_0 pgsr02" [ style=bold color="green" fontcolor="orange"]
-+"vip-rep_monitor_10000 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"vip-rep_start_0 pgsr01" -> "master-group_running_0" [ style = bold]
-+"vip-rep_start_0 pgsr01" -> "vip-rep_monitor_10000 pgsr01" [ style = bold]
-+"vip-rep_start_0 pgsr01" [ style=bold color="green" fontcolor="black"]
-+"vip-rep_stop_0 pgsr02" -> "all_stopped" [ style = bold]
-+"vip-rep_stop_0 pgsr02" -> "master-group_stopped_0" [ style = bold]
-+"vip-rep_stop_0 pgsr02" -> "vip-master_stop_0 pgsr02" [ style = bold]
-+"vip-rep_stop_0 pgsr02" -> "vip-rep_start_0 pgsr01" [ style = bold]
-+"vip-rep_stop_0 pgsr02" [ style=bold color="green" fontcolor="orange"]
-+}
-diff --git a/pengine/test10/bug-cl-5247.exp b/pengine/test10/bug-cl-5247.exp
-new file mode 100644
-index 0000000..5e36e84
---- /dev/null
-+++ b/pengine/test10/bug-cl-5247.exp
-@@ -0,0 +1,704 @@
-+<transition_graph cluster-delay="60s" stonith-timeout="60s" \
                failed-stop-offset="INFINITY" failed-start-offset="INFINITY"  \
                transition_id="0">
-+  <synapse id="0">
-+    <action_set>
-+      <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400">
-+        <primitive id="prmDB2" class="ocf" provider="heartbeat" \
                type="VirtualDomain"/>
-+        <attributes CRM_meta_name="stop" CRM_meta_on_fail="fence" \
CRM_meta_remote_node="pgsr02" CRM_meta_timeout="120000" \
config="/etc/libvirt/qemu/pgsr02.xml"  hypervisor="qemu:///system" \
                migration_transport="ssh"/>
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="70" operation="stop" operation_key="pgsr02_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="1">
-+    <action_set>
-+      <pseudo_event id="21" operation="stopped" \
                operation_key="grpStonith1_stopped_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="17" operation="stop" \
                operation_key="prmStonith1-2_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="20" operation="stop" operation_key="grpStonith1_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="2">
-+    <action_set>
-+      <pseudo_event id="20" operation="stop" operation_key="grpStonith1_stop_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="3">
-+    <action_set>
-+      <pseudo_event id="19" operation="running" \
                operation_key="grpStonith1_running_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="9" operation="start" operation_key="prmStonith1-2_start_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="18" operation="start" \
                operation_key="grpStonith1_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="4">
-+    <action_set>
-+      <pseudo_event id="18" operation="start" operation_key="grpStonith1_start_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="21" operation="stopped" \
                operation_key="grpStonith1_stopped_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="5">
-+    <action_set>
-+      <pseudo_event id="17" operation="stop" operation_key="prmStonith1-2_stop_0">
-+        <attributes CRM_meta_name="stop" CRM_meta_on_fail="ignore" \
CRM_meta_timeout="60000"  hostname="bl460g8n3" interface="lanplus" \
                ipaddr="192.168.28.43" passwd="****" pcmk_reboot_timeout="60s" \
                userid="USERID"/>
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="20" operation="stop" operation_key="grpStonith1_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="6">
-+    <action_set>
-+      <rsc_op id="9" operation="start" operation_key="prmStonith1-2_start_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400">
-+        <primitive id="prmStonith1-2" class="stonith" type="external/ipmi"/>
-+        <attributes CRM_meta_name="start" CRM_meta_on_fail="restart" \
CRM_meta_timeout="60000"  hostname="bl460g8n3" interface="lanplus" \
                ipaddr="192.168.28.43" passwd="****" pcmk_reboot_timeout="60s" \
                userid="USERID"/>
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="17" operation="stop" \
                operation_key="prmStonith1-2_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="18" operation="start" \
                operation_key="grpStonith1_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="7">
-+    <action_set>
-+      <rsc_op id="1" operation="monitor" \
operation_key="prmStonith1-2_monitor_3600000" on_node="bl460g8n4" \
                on_node_uuid="3232261400">
-+        <primitive id="prmStonith1-2" class="stonith" type="external/ipmi"/>
-+        <attributes CRM_meta_interval="3600000" CRM_meta_name="monitor" \
CRM_meta_on_fail="restart" CRM_meta_timeout="60000"  hostname="bl460g8n3" \
interface="lanplus" ipaddr="192.168.28.43" passwd="****" pcmk_reboot_timeout="60s" \
                userid="USERID"/>
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="9" operation="start" operation_key="prmStonith1-2_start_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="8">
-+    <action_set>
-+      <pseudo_event id="26" operation="stopped" \
                operation_key="grpStonith2_stopped_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="22" operation="stop" operation_key="prmStonith2-2_stop_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="25" operation="stop" operation_key="grpStonith2_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="9">
-+    <action_set>
-+      <pseudo_event id="25" operation="stop" operation_key="grpStonith2_stop_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="10">
-+    <action_set>
-+      <pseudo_event id="24" operation="running" \
                operation_key="grpStonith2_running_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="10" operation="start" operation_key="prmStonith2-2_start_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="23" operation="start" \
                operation_key="grpStonith2_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="11">
-+    <action_set>
-+      <pseudo_event id="23" operation="start" operation_key="grpStonith2_start_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="26" operation="stopped" \
                operation_key="grpStonith2_stopped_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="12">
-+    <action_set>
-+      <rsc_op id="22" operation="stop" operation_key="prmStonith2-2_stop_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399">
-+        <primitive id="prmStonith2-2" class="stonith" type="external/ipmi"/>
-+        <attributes CRM_meta_name="stop" CRM_meta_on_fail="ignore" \
CRM_meta_timeout="60000"  hostname="bl460g8n4" interface="lanplus" \
                ipaddr="192.168.28.44" passwd="****" pcmk_reboot_timeout="60s" \
                userid="USERID"/>
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="25" operation="stop" operation_key="grpStonith2_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="13">
-+    <action_set>
-+      <rsc_op id="10" operation="start" operation_key="prmStonith2-2_start_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399">
-+        <primitive id="prmStonith2-2" class="stonith" type="external/ipmi"/>
-+        <attributes CRM_meta_name="start" CRM_meta_on_fail="restart" \
CRM_meta_timeout="60000"  hostname="bl460g8n4" interface="lanplus" \
                ipaddr="192.168.28.44" passwd="****" pcmk_reboot_timeout="60s" \
                userid="USERID"/>
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="22" operation="stop" operation_key="prmStonith2-2_stop_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="23" operation="start" \
                operation_key="grpStonith2_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="14">
-+    <action_set>
-+      <rsc_op id="5" operation="monitor" \
operation_key="prmStonith2-2_monitor_3600000" on_node="bl460g8n3" \
                on_node_uuid="3232261399">
-+        <primitive id="prmStonith2-2" class="stonith" type="external/ipmi"/>
-+        <attributes CRM_meta_interval="3600000" CRM_meta_name="monitor" \
CRM_meta_on_fail="restart" CRM_meta_timeout="60000"  hostname="bl460g8n4" \
interface="lanplus" ipaddr="192.168.28.44" passwd="****" pcmk_reboot_timeout="60s" \
                userid="USERID"/>
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="10" operation="start" operation_key="prmStonith2-2_start_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="15">
-+    <action_set>
-+      <pseudo_event id="36" operation="stopped" \
                operation_key="master-group_stopped_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="27" operation="stop" operation_key="vip-master_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="30" operation="stop" operation_key="vip-rep_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="35" operation="stop" \
                operation_key="master-group_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="16">
-+    <action_set>
-+      <pseudo_event id="35" operation="stop" operation_key="master-group_stop_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="67" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_demoted_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="17">
-+    <action_set>
-+      <pseudo_event id="34" operation="running" \
                operation_key="master-group_running_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="28" operation="start" operation_key="vip-master_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="31" operation="start" operation_key="vip-rep_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="33" operation="start" \
                operation_key="master-group_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="18">
-+    <action_set>
-+      <pseudo_event id="33" operation="start" operation_key="master-group_start_0">
-+        <attributes CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="36" operation="stopped" \
                operation_key="master-group_stopped_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="19">
-+    <action_set>
-+      <rsc_op id="29" operation="monitor" operation_key="vip-master_monitor_10000" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="vip-master" class="ocf" provider="heartbeat" type="Dummy"/>
-+        <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" \
                CRM_meta_on_fail="restart" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="28" operation="start" operation_key="vip-master_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="20">
-+    <action_set>
-+      <rsc_op id="28" operation="start" operation_key="vip-master_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="vip-master" class="ocf" provider="heartbeat" type="Dummy"/>
-+        <attributes CRM_meta_name="start" CRM_meta_on_fail="restart" \
                CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="27" operation="stop" operation_key="vip-master_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="33" operation="start" \
                operation_key="master-group_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="21">
-+    <action_set>
-+      <pseudo_event id="27" operation="stop" operation_key="vip-master_stop_0">
-+        <attributes CRM_meta_name="stop" CRM_meta_on_fail="fence" \
                CRM_meta_timeout="60000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="30" operation="stop" operation_key="vip-rep_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="35" operation="stop" \
                operation_key="master-group_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="22">
-+    <action_set>
-+      <rsc_op id="32" operation="monitor" operation_key="vip-rep_monitor_10000" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="vip-rep" class="ocf" provider="heartbeat" type="Dummy"/>
-+        <attributes CRM_meta_interval="10000" CRM_meta_name="monitor" \
                CRM_meta_on_fail="restart" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="31" operation="start" operation_key="vip-rep_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="23">
-+    <action_set>
-+      <rsc_op id="31" operation="start" operation_key="vip-rep_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="vip-rep" class="ocf" provider="heartbeat" type="Dummy"/>
-+        <attributes CRM_meta_name="start" CRM_meta_on_fail="stop" \
                CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="28" operation="start" operation_key="vip-master_start_0" \
                on_node="pgsr01" on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="30" operation="stop" operation_key="vip-rep_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="33" operation="start" \
                operation_key="master-group_start_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="24">
-+    <action_set>
-+      <pseudo_event id="30" operation="stop" operation_key="vip-rep_stop_0">
-+        <attributes CRM_meta_name="stop" CRM_meta_on_fail="ignore" \
                CRM_meta_timeout="60000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="35" operation="stop" \
                operation_key="master-group_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="25" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="73" operation="notified" operation_key="pgsql_notified_0" \
                internal_operation_key="pgsql:0_confirmed-post_notify_stop_0">
-+        <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="notify" CRM_meta_notify="true" \
CRM_meta_notify_key_operation="stop" CRM_meta_notify_key_type="confirmed-post" \
CRM_meta_notify_operation="stop" CRM_meta_notify_type="post" CRM_meta_timeout="60000" \
                />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="72" operation="notify" \
operation_key="pgsql_post_notify_stop_0" \
                internal_operation_key="pgsql:0_post_notify_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="74" operation="notify" operation_key="pgsql_post_notify_stop_0" \
internal_operation_key="pgsql:1_post_notify_stop_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="26" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="72" operation="notify" \
operation_key="pgsql_post_notify_stop_0" \
                internal_operation_key="pgsql:0_post_notify_stop_0">
-+        <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="notify" CRM_meta_notify="true" \
CRM_meta_notify_key_operation="stop" CRM_meta_notify_key_type="post" \
CRM_meta_notify_operation="stop" CRM_meta_notify_type="post" CRM_meta_timeout="60000" \
                />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="27">
-+    <action_set>
-+      <pseudo_event id="38" operation="stop" operation_key="pgsql_stop_0" \
                internal_operation_key="pgsql:0_stop_0">
-+        <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="stop" CRM_meta_notify="true" \
CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " \
CRM_meta_notify_all_uname="bl460g8n3 bl460g8n4 pgsr01 pgsr02" \
CRM_meta_notify_available_uname="bl460g8n4 bl460g8n3 pgsr01 pgsr02" \
CRM_meta_notify_demote_resource="pgsql:0" CRM_meta_notify_demote_uname="pgsr02" \
CRM_meta_notify_inactive_resource=" " CRM_meta_notify_master_resource="pgsql:0 \
pgsql:1" CRM_meta_notify_master_uname="pgsr02 pgsr01" \
CRM_meta_notify_promote_resource=" " CRM_meta_notify_promote_uname=" " \
CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " \
CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " \
CRM_meta_notify_stop_resource="pgsql:0" CRM_meta_notify_stop_uname="pgsr02" \
                CRM_meta_on_fail="fence" CRM_meta_timeout="300000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="37" operation="demote" operation_key="pgsql_demote_0" \
                internal_operation_key="pgsql:0_demote_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="50" operation="stop" \
                operation_key="msPostgresql_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="28">
-+    <action_set>
-+      <pseudo_event id="37" operation="demote" operation_key="pgsql_demote_0" \
                internal_operation_key="pgsql:0_demote_0">
-+        <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="demote" CRM_meta_notify="true" \
CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " \
CRM_meta_notify_all_uname="bl460g8n3 bl460g8n4 pgsr01 pgsr02" \
CRM_meta_notify_available_uname="bl460g8n4 bl460g8n3 pgsr01 pgsr02" \
CRM_meta_notify_demote_resource="pgsql:0" CRM_meta_notify_demote_uname="pgsr02" \
CRM_meta_notify_inactive_resource=" " CRM_meta_notify_master_resource="pgsql:0 \
pgsql:1" CRM_meta_notify_master_uname="pgsr02 pgsr01" \
CRM_meta_notify_promote_resource=" " CRM_meta_notify_promote_uname=" " \
CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " \
CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " \
CRM_meta_notify_stop_resource="pgsql:0" CRM_meta_notify_stop_uname="pgsr02" \
                CRM_meta_on_fail="fence" CRM_meta_timeout="300000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="62" operation="demote" \
                operation_key="msPostgresql_demote_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="29" priority="1000000">
-+    <action_set>
-+      <rsc_op id="84" operation="notify" operation_key="pgsql_post_notify_demote_0" \
internal_operation_key="pgsql:1_post_notify_demote_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="pgsql" long-id="pgsql:1" class="ocf" provider="heartbeat" \
                type="Stateful"/>
-+        <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="notify" CRM_meta_notify="true" \
CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " \
CRM_meta_notify_all_uname="bl460g8n3 bl460g8n4 pgsr01 pgsr02" \
CRM_meta_notify_available_uname="bl460g8n4 bl460g8n3 pgsr01 pgsr02" \
CRM_meta_notify_demote_resource="pgsql:0" CRM_meta_notify_demote_uname="pgsr02" \
CRM_meta_notify_inactive_resource=" " CRM_meta_notify_key_operation="demoted" \
CRM_meta_notify_key_type="post" CRM_meta_notify_master_resource="pgsql:0 pgsql:1" \
CRM_meta_notify_master_uname="pgsr02 pgsr01" CRM_meta_notify_operation="demote" \
CRM_meta_notify_promote_resource=" " CRM_meta_notify_promote_uname=" " \
CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " \
CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " \
CRM_meta_notify_stop_resource="pgsql:0" CRM_meta_notify_stop_uname="pgsr02" \
                CRM_meta_notify_type="post" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="66" operation="notify" \
                operation_key="msPostgresql_post_notify_demoted_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="30">
-+    <action_set>
-+      <rsc_op id="83" operation="notify" operation_key="pgsql_pre_notify_demote_0" \
internal_operation_key="pgsql:1_pre_notify_demote_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="pgsql" long-id="pgsql:1" class="ocf" provider="heartbeat" \
                type="Stateful"/>
-+        <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="notify" CRM_meta_notify="true" \
CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " \
CRM_meta_notify_all_uname="bl460g8n3 bl460g8n4 pgsr01 pgsr02" \
CRM_meta_notify_available_uname="bl460g8n4 bl460g8n3 pgsr01 pgsr02" \
CRM_meta_notify_demote_resource="pgsql:0" CRM_meta_notify_demote_uname="pgsr02" \
CRM_meta_notify_inactive_resource=" " CRM_meta_notify_key_operation="demote" \
CRM_meta_notify_key_type="pre" CRM_meta_notify_master_resource="pgsql:0 pgsql:1" \
CRM_meta_notify_master_uname="pgsr02 pgsr01" CRM_meta_notify_operation="demote" \
CRM_meta_notify_promote_resource=" " CRM_meta_notify_promote_uname=" " \
CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " \
CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " \
CRM_meta_notify_stop_resource="pgsql:0" CRM_meta_notify_stop_uname="pgsr02" \
                CRM_meta_notify_type="pre" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="64" operation="notify" \
                operation_key="msPostgresql_pre_notify_demote_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="31">
-+    <action_set>
-+      <rsc_op id="80" operation="notify" operation_key="pgsql_pre_notify_stop_0" \
internal_operation_key="pgsql:1_pre_notify_stop_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="pgsql" long-id="pgsql:1" class="ocf" provider="heartbeat" \
                type="Stateful"/>
-+        <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="notify" CRM_meta_notify="true" \
CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " \
CRM_meta_notify_all_uname="bl460g8n3 bl460g8n4 pgsr01 pgsr02" \
CRM_meta_notify_available_uname="bl460g8n4 bl460g8n3 pgsr01 pgsr02" \
CRM_meta_notify_demote_resource="pgsql:0" CRM_meta_notify_demote_uname="pgsr02" \
CRM_meta_notify_inactive_resource=" " CRM_meta_notify_key_operation="stop" \
CRM_meta_notify_key_type="pre" CRM_meta_notify_master_resource="pgsql:0 pgsql:1" \
CRM_meta_notify_master_uname="pgsr02 pgsr01" CRM_meta_notify_operation="stop" \
CRM_meta_notify_promote_resource=" " CRM_meta_notify_promote_uname=" " \
CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " \
CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " \
CRM_meta_notify_stop_resource="pgsql:0" CRM_meta_notify_stop_uname="pgsr02" \
                CRM_meta_notify_type="pre" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="52" operation="notify" \
                operation_key="msPostgresql_pre_notify_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="32" priority="1000000">
-+    <action_set>
-+      <rsc_op id="74" operation="notify" operation_key="pgsql_post_notify_stop_0" \
internal_operation_key="pgsql:1_post_notify_stop_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3">
-+        <primitive id="pgsql" long-id="pgsql:1" class="ocf" provider="heartbeat" \
                type="Stateful"/>
-+        <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_master_max="1" \
CRM_meta_master_node_max="1" CRM_meta_name="notify" CRM_meta_notify="true" \
CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " \
CRM_meta_notify_all_uname="bl460g8n3 bl460g8n4 pgsr01 pgsr02" \
CRM_meta_notify_available_uname="bl460g8n4 bl460g8n3 pgsr01 pgsr02" \
CRM_meta_notify_demote_resource="pgsql:0" CRM_meta_notify_demote_uname="pgsr02" \
CRM_meta_notify_inactive_resource=" " CRM_meta_notify_key_operation="stop" \
CRM_meta_notify_key_type="post" CRM_meta_notify_master_resource="pgsql:0 pgsql:1" \
CRM_meta_notify_master_uname="pgsr02 pgsr01" CRM_meta_notify_operation="stop" \
CRM_meta_notify_promote_resource=" " CRM_meta_notify_promote_uname=" " \
CRM_meta_notify_slave_resource=" " CRM_meta_notify_slave_uname=" " \
CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " \
CRM_meta_notify_stop_resource="pgsql:0" CRM_meta_notify_stop_uname="pgsr02" \
                CRM_meta_notify_type="post" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="54" operation="notify" \
                operation_key="msPostgresql_post_notify_stopped_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="72" operation="notify" \
operation_key="pgsql_post_notify_stop_0" \
                internal_operation_key="pgsql:0_post_notify_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="33">
-+    <action_set>
-+      <rsc_op id="43" operation="monitor" operation_key="pgsql_monitor_9000" \
internal_operation_key="pgsql:1_monitor_9000" on_node="pgsr01" on_node_uuid="pgsr01" \
                router_node="bl460g8n3">
-+        <primitive id="pgsql" long-id="pgsql:1" class="ocf" provider="heartbeat" \
                type="Stateful"/>
-+        <attributes CRM_meta_clone="1" CRM_meta_clone_max="2" \
CRM_meta_clone_node_max="1" CRM_meta_globally_unique="false" CRM_meta_interval="9000" \
CRM_meta_master_max="1" CRM_meta_master_node_max="1" CRM_meta_name="monitor" \
CRM_meta_notify="true" CRM_meta_on_fail="restart" CRM_meta_op_target_rc="8" \
                CRM_meta_role="Master" CRM_meta_timeout="60000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="55" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_stopped_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="67" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_demoted_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="73" operation="notified" operation_key="pgsql_notified_0" \
                internal_operation_key="pgsql:0_confirmed-post_notify_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="34" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="67" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_demoted_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="demoted" \
CRM_meta_notify_key_type="confirmed-post" CRM_meta_notify_operation="demote" \
                CRM_meta_notify_type="post" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="66" operation="notify" \
                operation_key="msPostgresql_post_notify_demoted_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="84" operation="notify" \
operation_key="pgsql_post_notify_demote_0" \
internal_operation_key="pgsql:1_post_notify_demote_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="35" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="66" operation="notify" \
                operation_key="msPostgresql_post_notify_demoted_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="demoted" \
CRM_meta_notify_key_type="post" CRM_meta_notify_operation="demote" \
                CRM_meta_notify_type="post" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="63" operation="demoted" \
                operation_key="msPostgresql_demoted_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="65" operation="notified" \
                operation_key="msPostgresql_confirmed-pre_notify_demote_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="36">
-+    <action_set>
-+      <pseudo_event id="65" operation="notified" \
                operation_key="msPostgresql_confirmed-pre_notify_demote_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="demote" \
CRM_meta_notify_key_type="confirmed-pre" CRM_meta_notify_operation="demote" \
                CRM_meta_notify_type="pre" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="64" operation="notify" \
                operation_key="msPostgresql_pre_notify_demote_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="83" operation="notify" \
operation_key="pgsql_pre_notify_demote_0" \
internal_operation_key="pgsql:1_pre_notify_demote_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="37">
-+    <action_set>
-+      <pseudo_event id="64" operation="notify" \
                operation_key="msPostgresql_pre_notify_demote_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="demote" \
CRM_meta_notify_key_type="pre" CRM_meta_notify_operation="demote" \
                CRM_meta_notify_type="pre" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="38" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="63" operation="demoted" \
                operation_key="msPostgresql_demoted_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
                CRM_meta_notify="true" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="37" operation="demote" operation_key="pgsql_demote_0" \
                internal_operation_key="pgsql:0_demote_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="62" operation="demote" \
                operation_key="msPostgresql_demote_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="39">
-+    <action_set>
-+      <pseudo_event id="62" operation="demote" \
                operation_key="msPostgresql_demote_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
                CRM_meta_notify="true" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="65" operation="notified" \
                operation_key="msPostgresql_confirmed-pre_notify_demote_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="40" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="55" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_stopped_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="stopped" \
CRM_meta_notify_key_type="confirmed-post" CRM_meta_notify_operation="stop" \
                CRM_meta_notify_type="post" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="54" operation="notify" \
                operation_key="msPostgresql_post_notify_stopped_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="74" operation="notify" operation_key="pgsql_post_notify_stop_0" \
internal_operation_key="pgsql:1_post_notify_stop_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="41" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="54" operation="notify" \
                operation_key="msPostgresql_post_notify_stopped_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="stopped" \
CRM_meta_notify_key_type="post" CRM_meta_notify_operation="stop" \
                CRM_meta_notify_type="post" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="51" operation="stopped" \
                operation_key="msPostgresql_stopped_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="53" operation="notified" \
                operation_key="msPostgresql_confirmed-pre_notify_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="42">
-+    <action_set>
-+      <pseudo_event id="53" operation="notified" \
                operation_key="msPostgresql_confirmed-pre_notify_stop_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="stop" \
CRM_meta_notify_key_type="confirmed-pre" CRM_meta_notify_operation="stop" \
                CRM_meta_notify_type="pre" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="52" operation="notify" \
                operation_key="msPostgresql_pre_notify_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="80" operation="notify" operation_key="pgsql_pre_notify_stop_0" \
internal_operation_key="pgsql:1_pre_notify_stop_0" on_node="pgsr01" \
                on_node_uuid="pgsr01" router_node="bl460g8n3"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="43">
-+    <action_set>
-+      <pseudo_event id="52" operation="notify" \
                operation_key="msPostgresql_pre_notify_stop_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
CRM_meta_notify="true" CRM_meta_notify_key_operation="stop" \
CRM_meta_notify_key_type="pre" CRM_meta_notify_operation="stop" \
                CRM_meta_notify_type="pre" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="67" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_demoted_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="44" priority="1000000">
-+    <action_set>
-+      <pseudo_event id="51" operation="stopped" \
                operation_key="msPostgresql_stopped_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
                CRM_meta_notify="true" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="38" operation="stop" operation_key="pgsql_stop_0" \
                internal_operation_key="pgsql:0_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="50" operation="stop" \
                operation_key="msPostgresql_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="45">
-+    <action_set>
-+      <pseudo_event id="50" operation="stop" operation_key="msPostgresql_stop_0">
-+        <attributes CRM_meta_clone_max="2" CRM_meta_clone_node_max="1" \
CRM_meta_globally_unique="false" CRM_meta_master_max="1" CRM_meta_master_node_max="1" \
                CRM_meta_notify="true" CRM_meta_timeout="20000" />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="53" operation="notified" \
                operation_key="msPostgresql_confirmed-pre_notify_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="63" operation="demoted" \
                operation_key="msPostgresql_demoted_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="46">
-+    <action_set>
-+      <rsc_op id="70" operation="stop" operation_key="pgsr02_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400">
-+        <primitive id="pgsr02" class="ocf" provider="pacemaker" type="remote"/>
-+        <attributes CRM_meta_container="prmDB2" CRM_meta_timeout="20000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="47">
-+    <action_set>
-+      <pseudo_event id="8" operation="all_stopped" operation_key="all_stopped">
-+        <attributes />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="4" operation="stop" operation_key="prmDB2_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="17" operation="stop" \
                operation_key="prmStonith1-2_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="22" operation="stop" operation_key="prmStonith2-2_stop_0" \
                on_node="bl460g8n3" on_node_uuid="3232261399"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="27" operation="stop" operation_key="vip-master_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="30" operation="stop" operation_key="vip-rep_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="38" operation="stop" operation_key="pgsql_stop_0" \
                internal_operation_key="pgsql:0_stop_0"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="55" operation="notified" \
                operation_key="msPostgresql_confirmed-post_notify_stopped_0"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="70" operation="stop" operation_key="pgsr02_stop_0" \
                on_node="bl460g8n4" on_node_uuid="3232261400"/>
-+      </trigger>
-+      <trigger>
-+        <pseudo_event id="73" operation="notified" operation_key="pgsql_notified_0" \
                internal_operation_key="pgsql:0_confirmed-post_notify_stop_0"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+</transition_graph>
-diff --git a/pengine/test10/bug-cl-5247.scores b/pengine/test10/bug-cl-5247.scores
-new file mode 100644
-index 0000000..e9e4709
---- /dev/null
-+++ b/pengine/test10/bug-cl-5247.scores
-@@ -0,0 +1,84 @@
-+Allocation scores:
-+Using the original execution date of: 2015-08-12 02:53:40Z
-+clone_color: msPostgresql allocation score on bl460g8n3: -INFINITY
-+clone_color: msPostgresql allocation score on bl460g8n4: -INFINITY
-+clone_color: msPostgresql allocation score on pgsr01: 0
-+clone_color: msPostgresql allocation score on pgsr02: 0
-+clone_color: pgsql:0 allocation score on bl460g8n3: -INFINITY
-+clone_color: pgsql:0 allocation score on bl460g8n4: -INFINITY
-+clone_color: pgsql:0 allocation score on pgsr01: 0
-+clone_color: pgsql:0 allocation score on pgsr02: INFINITY
-+clone_color: pgsql:1 allocation score on bl460g8n3: -INFINITY
-+clone_color: pgsql:1 allocation score on bl460g8n4: -INFINITY
-+clone_color: pgsql:1 allocation score on pgsr01: INFINITY
-+clone_color: pgsql:1 allocation score on pgsr02: 0
-+group_color: grpStonith1 allocation score on bl460g8n3: -INFINITY
-+group_color: grpStonith1 allocation score on bl460g8n4: 0
-+group_color: grpStonith1 allocation score on pgsr01: -INFINITY
-+group_color: grpStonith1 allocation score on pgsr02: -INFINITY
-+group_color: grpStonith2 allocation score on bl460g8n3: 0
-+group_color: grpStonith2 allocation score on bl460g8n4: -INFINITY
-+group_color: grpStonith2 allocation score on pgsr01: -INFINITY
-+group_color: grpStonith2 allocation score on pgsr02: -INFINITY
-+group_color: master-group allocation score on bl460g8n3: 0
-+group_color: master-group allocation score on bl460g8n4: 0
-+group_color: master-group allocation score on pgsr01: 0
-+group_color: master-group allocation score on pgsr02: 0
-+group_color: prmStonith1-2 allocation score on bl460g8n3: -INFINITY
-+group_color: prmStonith1-2 allocation score on bl460g8n4: INFINITY
-+group_color: prmStonith1-2 allocation score on pgsr01: -INFINITY
-+group_color: prmStonith1-2 allocation score on pgsr02: -INFINITY
-+group_color: prmStonith2-2 allocation score on bl460g8n3: INFINITY
-+group_color: prmStonith2-2 allocation score on bl460g8n4: -INFINITY
-+group_color: prmStonith2-2 allocation score on pgsr01: -INFINITY
-+group_color: prmStonith2-2 allocation score on pgsr02: -INFINITY
-+group_color: vip-master allocation score on bl460g8n3: 0
-+group_color: vip-master allocation score on bl460g8n4: 0
-+group_color: vip-master allocation score on pgsr01: 0
-+group_color: vip-master allocation score on pgsr02: INFINITY
-+group_color: vip-rep allocation score on bl460g8n3: 0
-+group_color: vip-rep allocation score on bl460g8n4: 0
-+group_color: vip-rep allocation score on pgsr01: 0
-+group_color: vip-rep allocation score on pgsr02: INFINITY
-+native_color: pgsql:0 allocation score on bl460g8n3: -INFINITY
-+native_color: pgsql:0 allocation score on bl460g8n4: -INFINITY
-+native_color: pgsql:0 allocation score on pgsr01: -INFINITY
-+native_color: pgsql:0 allocation score on pgsr02: -INFINITY
-+native_color: pgsql:1 allocation score on bl460g8n3: -INFINITY
-+native_color: pgsql:1 allocation score on bl460g8n4: -INFINITY
-+native_color: pgsql:1 allocation score on pgsr01: INFINITY
-+native_color: pgsql:1 allocation score on pgsr02: -INFINITY
-+native_color: pgsr01 allocation score on bl460g8n3: INFINITY
-+native_color: pgsr01 allocation score on bl460g8n4: -INFINITY
-+native_color: pgsr01 allocation score on pgsr01: -INFINITY
-+native_color: pgsr01 allocation score on pgsr02: -INFINITY
-+native_color: pgsr02 allocation score on bl460g8n3: -INFINITY
-+native_color: pgsr02 allocation score on bl460g8n4: -INFINITY
-+native_color: pgsr02 allocation score on pgsr01: -INFINITY
-+native_color: pgsr02 allocation score on pgsr02: -INFINITY
-+native_color: prmDB1 allocation score on bl460g8n3: INFINITY
-+native_color: prmDB1 allocation score on bl460g8n4: -INFINITY
-+native_color: prmDB1 allocation score on pgsr01: -INFINITY
-+native_color: prmDB1 allocation score on pgsr02: -INFINITY
-+native_color: prmDB2 allocation score on bl460g8n3: -INFINITY
-+native_color: prmDB2 allocation score on bl460g8n4: -INFINITY
-+native_color: prmDB2 allocation score on pgsr01: -INFINITY
-+native_color: prmDB2 allocation score on pgsr02: -INFINITY
-+native_color: prmStonith1-2 allocation score on bl460g8n3: -INFINITY
-+native_color: prmStonith1-2 allocation score on bl460g8n4: INFINITY
-+native_color: prmStonith1-2 allocation score on pgsr01: -INFINITY
-+native_color: prmStonith1-2 allocation score on pgsr02: -INFINITY
-+native_color: prmStonith2-2 allocation score on bl460g8n3: INFINITY
-+native_color: prmStonith2-2 allocation score on bl460g8n4: -INFINITY
-+native_color: prmStonith2-2 allocation score on pgsr01: -INFINITY
-+native_color: prmStonith2-2 allocation score on pgsr02: -INFINITY
-+native_color: vip-master allocation score on bl460g8n3: -INFINITY
-+native_color: vip-master allocation score on bl460g8n4: -INFINITY
-+native_color: vip-master allocation score on pgsr01: INFINITY
-+native_color: vip-master allocation score on pgsr02: -INFINITY
-+native_color: vip-rep allocation score on bl460g8n3: -INFINITY
-+native_color: vip-rep allocation score on bl460g8n4: -INFINITY
-+native_color: vip-rep allocation score on pgsr01: 0
-+native_color: vip-rep allocation score on pgsr02: -INFINITY
-+pgsql:0 promotion score on none: 0
-+pgsql:1 promotion score on pgsr01: 10
-diff --git a/pengine/test10/bug-cl-5247.summary b/pengine/test10/bug-cl-5247.summary
-new file mode 100644
-index 0000000..5564286
---- /dev/null
-+++ b/pengine/test10/bug-cl-5247.summary
-@@ -0,0 +1,96 @@
-+Using the original execution date of: 2015-08-12 02:53:40Z
-+
-+Current cluster status:
-+Online: [ bl460g8n3 bl460g8n4 ]
-+Containers: [ pgsr01:prmDB1 ]
-+
-+ prmDB1	(ocf::heartbeat:VirtualDomain):	Started bl460g8n3
-+ prmDB2	(ocf::heartbeat:VirtualDomain):	FAILED bl460g8n4
-+ Resource Group: grpStonith1
-+     prmStonith1-2	(stonith:external/ipmi):	Started bl460g8n4
-+ Resource Group: grpStonith2
-+     prmStonith2-2	(stonith:external/ipmi):	Started bl460g8n3
-+ Resource Group: master-group
-+     vip-master	(ocf::heartbeat:Dummy):	FAILED pgsr02
-+     vip-rep	(ocf::heartbeat:Dummy):	FAILED pgsr02
-+ Master/Slave Set: msPostgresql [pgsql]
-+     Masters: [ pgsr01 ]
-+     Stopped: [ bl460g8n3 bl460g8n4 ]
-+
-+Transition Summary:
-+ * Stop    prmDB2	(bl460g8n4)
-+ * Restart prmStonith1-2	(Started bl460g8n4)
-+ * Restart prmStonith2-2	(Started bl460g8n3)
-+ * Recover vip-master	(Started pgsr02 -> pgsr01)
-+ * Recover vip-rep	(Started pgsr02 -> pgsr01)
-+ * Demote  pgsql:0	(Master -> Stopped pgsr02)
-+ * Stop    pgsr02	(bl460g8n4)
-+
-+Executing cluster transition:
-+ * Pseudo action:   grpStonith1_stop_0
-+ * Pseudo action:   prmStonith1-2_stop_0
-+ * Pseudo action:   grpStonith2_stop_0
-+ * Resource action: prmStonith2-2   stop on bl460g8n3
-+ * Pseudo action:   msPostgresql_pre_notify_demote_0
-+ * Resource action: pgsr02          stop on bl460g8n4
-+ * Resource action: prmDB2          stop on bl460g8n4
-+ * Pseudo action:   grpStonith1_stopped_0
-+ * Pseudo action:   grpStonith1_start_0
-+ * Resource action: prmStonith1-2   start on bl460g8n4
-+ * Resource action: prmStonith1-2   monitor=3600000 on bl460g8n4
-+ * Pseudo action:   grpStonith2_stopped_0
-+ * Pseudo action:   grpStonith2_start_0
-+ * Resource action: prmStonith2-2   start on bl460g8n3
-+ * Resource action: prmStonith2-2   monitor=3600000 on bl460g8n3
-+ * Pseudo action:   pgsql_post_notify_stop_0
-+ * Resource action: pgsql           notify on pgsr01
-+ * Pseudo action:   msPostgresql_confirmed-pre_notify_demote_0
-+ * Pseudo action:   msPostgresql_demote_0
-+ * Pseudo action:   grpStonith1_running_0
-+ * Pseudo action:   grpStonith2_running_0
-+ * Pseudo action:   pgsql_demote_0
-+ * Pseudo action:   msPostgresql_demoted_0
-+ * Pseudo action:   msPostgresql_post_notify_demoted_0
-+ * Resource action: pgsql           notify on pgsr01
-+ * Pseudo action:   msPostgresql_confirmed-post_notify_demoted_0
-+ * Pseudo action:   msPostgresql_pre_notify_stop_0
-+ * Pseudo action:   master-group_stop_0
-+ * Pseudo action:   vip-rep_stop_0
-+ * Resource action: pgsql           notify on pgsr01
-+ * Pseudo action:   msPostgresql_confirmed-pre_notify_stop_0
-+ * Pseudo action:   msPostgresql_stop_0
-+ * Pseudo action:   vip-master_stop_0
-+ * Pseudo action:   pgsql_stop_0
-+ * Pseudo action:   msPostgresql_stopped_0
-+ * Pseudo action:   master-group_stopped_0
-+ * Pseudo action:   master-group_start_0
-+ * Resource action: vip-master      start on pgsr01
-+ * Resource action: vip-rep         start on pgsr01
-+ * Pseudo action:   msPostgresql_post_notify_stopped_0
-+ * Pseudo action:   master-group_running_0
-+ * Resource action: vip-master      monitor=10000 on pgsr01
-+ * Resource action: vip-rep         monitor=10000 on pgsr01
-+ * Resource action: pgsql           notify on pgsr01
-+ * Pseudo action:   msPostgresql_confirmed-post_notify_stopped_0
-+ * Pseudo action:   pgsql_notified_0
-+ * Resource action: pgsql           monitor=9000 on pgsr01
-+ * Pseudo action:   all_stopped
-+Using the original execution date of: 2015-08-12 02:53:40Z
-+
-+Revised cluster status:
-+Online: [ bl460g8n3 bl460g8n4 ]
-+Containers: [ pgsr01:prmDB1 ]
-+
-+ prmDB1	(ocf::heartbeat:VirtualDomain):	Started bl460g8n3
-+ prmDB2	(ocf::heartbeat:VirtualDomain):	FAILED
-+ Resource Group: grpStonith1
-+     prmStonith1-2	(stonith:external/ipmi):	Started bl460g8n4
-+ Resource Group: grpStonith2
-+     prmStonith2-2	(stonith:external/ipmi):	Started bl460g8n3
-+ Resource Group: master-group
-+     vip-master	(ocf::heartbeat:Dummy):	FAILED[ pgsr02 pgsr01 ]
-+     vip-rep	(ocf::heartbeat:Dummy):	FAILED[ pgsr02 pgsr01 ]
-+ Master/Slave Set: msPostgresql [pgsql]
-+     Masters: [ pgsr01 ]
-+     Stopped: [ bl460g8n3 bl460g8n4 ]
-+
-diff --git a/pengine/test10/bug-cl-5247.xml b/pengine/test10/bug-cl-5247.xml
-new file mode 100644
-index 0000000..c36ef40
---- /dev/null
-+++ b/pengine/test10/bug-cl-5247.xml
-@@ -0,0 +1,295 @@
-+<cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="8" \
num_updates="33" admin_epoch="0" cib-last-written="Wed Aug 12 11:51:47 2015" \
update-origin="bl460g8n4" update-client="crm_resource" update-user="root" \
                have-quorum="1" dc-uuid="3232261399" execution-date="1439348020">
-+  <configuration>
-+    <crm_config>
-+      <cluster_property_set id="cib-bootstrap-options">
-+        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" \
                value="false"/>
-+        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" \
                value="1.1.13-ad1f397"/>
-+        <nvpair id="cib-bootstrap-options-cluster-infrastructure" \
                name="cluster-infrastructure" value="corosync"/>
-+        <nvpair name="no-quorum-policy" value="ignore" \
                id="cib-bootstrap-options-no-quorum-policy"/>
-+        <nvpair name="stonith-enabled" value="true" \
                id="cib-bootstrap-options-stonith-enabled"/>
-+        <nvpair name="startup-fencing" value="false" \
                id="cib-bootstrap-options-startup-fencing"/>
-+      </cluster_property_set>
-+    </crm_config>
-+    <nodes>
-+      <node id="3232261399" uname="bl460g8n3"/>
-+      <node id="3232261400" uname="bl460g8n4"/>
-+    </nodes>
-+    <resources>
-+      <primitive id="prmDB1" class="ocf" provider="heartbeat" type="VirtualDomain">
-+        <meta_attributes id="prmDB1-meta_attributes">
-+          <nvpair name="remote-node" value="pgsr01" \
                id="prmDB1-meta_attributes-remote-node"/>
-+        </meta_attributes>
-+        <instance_attributes id="prmDB1-instance_attributes">
-+          <nvpair name="config" value="/etc/libvirt/qemu/pgsr01.xml" \
                id="prmDB1-instance_attributes-config"/>
-+          <nvpair name="hypervisor" value="qemu:///system" \
                id="prmDB1-instance_attributes-hypervisor"/>
-+          <nvpair name="migration_transport" value="ssh" \
                id="prmDB1-instance_attributes-migration_transport"/>
-+        </instance_attributes>
-+        <operations>
-+          <op name="start" interval="0" timeout="120" on-fail="restart" \
                id="prmDB1-start-0"/>
-+          <op name="monitor" interval="10" timeout="30" on-fail="restart" \
                id="prmDB1-monitor-10"/>
-+          <op name="stop" interval="0" timeout="120" on-fail="fence" \
                id="prmDB1-stop-0"/>
-+        </operations>
-+        <utilization id="prmDB1-utilization">
-+          <nvpair id="prmDB1-utilization-cpu" name="cpu" value="2"/>
-+          <nvpair id="prmDB1-utilization-hv_memory" name="hv_memory" value="4024"/>
-+        </utilization>
-+      </primitive>
-+      <primitive id="prmDB2" class="ocf" provider="heartbeat" type="VirtualDomain">
-+        <meta_attributes id="prmDB2-meta_attributes">
-+          <nvpair name="remote-node" value="pgsr02" \
                id="prmDB2-meta_attributes-remote-node"/>
-+        </meta_attributes>
-+        <instance_attributes id="prmDB2-instance_attributes">
-+          <nvpair name="config" value="/etc/libvirt/qemu/pgsr02.xml" \
                id="prmDB2-instance_attributes-config"/>
-+          <nvpair name="hypervisor" value="qemu:///system" \
                id="prmDB2-instance_attributes-hypervisor"/>
-+          <nvpair name="migration_transport" value="ssh" \
                id="prmDB2-instance_attributes-migration_transport"/>
-+        </instance_attributes>
-+        <operations>
-+          <op name="start" interval="0" timeout="120" on-fail="restart" \
                id="prmDB2-start-0"/>
-+          <op name="monitor" interval="10" timeout="30" on-fail="restart" \
                id="prmDB2-monitor-10"/>
-+          <op name="stop" interval="0" timeout="120" on-fail="fence" \
                id="prmDB2-stop-0"/>
-+        </operations>
-+        <utilization id="prmDB2-utilization">
-+          <nvpair id="prmDB2-utilization-cpu" name="cpu" value="2"/>
-+          <nvpair id="prmDB2-utilization-hv_memory" name="hv_memory" value="4024"/>
-+        </utilization>
-+      </primitive>
-+      <group id="grpStonith1">
-+        <primitive id="prmStonith1-2" class="stonith" type="external/ipmi">
-+          <instance_attributes id="prmStonith1-2-instance_attributes">
-+            <nvpair name="pcmk_reboot_timeout" value="60s" \
                id="prmStonith1-2-instance_attributes-pcmk_reboot_timeout"/>
-+            <nvpair name="hostname" value="bl460g8n3" \
                id="prmStonith1-2-instance_attributes-hostname"/>
-+            <nvpair name="ipaddr" value="192.168.28.43" \
                id="prmStonith1-2-instance_attributes-ipaddr"/>
-+            <nvpair name="userid" value="USERID" \
                id="prmStonith1-2-instance_attributes-userid"/>
-+            <nvpair name="passwd" value="****" \
                id="prmStonith1-2-instance_attributes-passwd"/>
-+            <nvpair name="interface" value="lanplus" \
                id="prmStonith1-2-instance_attributes-interface"/>
-+          </instance_attributes>
-+          <operations>
-+            <op name="start" interval="0s" timeout="60s" on-fail="restart" \
                id="prmStonith1-2-start-0s"/>
-+            <op name="monitor" interval="3600s" timeout="60s" on-fail="restart" \
                id="prmStonith1-2-monitor-3600s"/>
-+            <op name="stop" interval="0s" timeout="60s" on-fail="ignore" \
                id="prmStonith1-2-stop-0s"/>
-+          </operations>
-+        </primitive>
-+      </group>
-+      <group id="grpStonith2">
-+        <primitive id="prmStonith2-2" class="stonith" type="external/ipmi">
-+          <instance_attributes id="prmStonith2-2-instance_attributes">
-+            <nvpair name="pcmk_reboot_timeout" value="60s" \
                id="prmStonith2-2-instance_attributes-pcmk_reboot_timeout"/>
-+            <nvpair name="hostname" value="bl460g8n4" \
                id="prmStonith2-2-instance_attributes-hostname"/>
-+            <nvpair name="ipaddr" value="192.168.28.44" \
                id="prmStonith2-2-instance_attributes-ipaddr"/>
-+            <nvpair name="userid" value="USERID" \
                id="prmStonith2-2-instance_attributes-userid"/>
-+            <nvpair name="passwd" value="****" \
                id="prmStonith2-2-instance_attributes-passwd"/>
-+            <nvpair name="interface" value="lanplus" \
                id="prmStonith2-2-instance_attributes-interface"/>
-+          </instance_attributes>
-+          <operations>
-+            <op name="start" interval="0s" timeout="60s" on-fail="restart" \
                id="prmStonith2-2-start-0s"/>
-+            <op name="monitor" interval="3600s" timeout="60s" on-fail="restart" \
                id="prmStonith2-2-monitor-3600s"/>
-+            <op name="stop" interval="0s" timeout="60s" on-fail="ignore" \
                id="prmStonith2-2-stop-0s"/>
-+          </operations>
-+        </primitive>
-+      </group>
-+      <group id="master-group">
-+        <primitive id="vip-master" class="ocf" provider="heartbeat" type="Dummy">
-+          <operations>
-+            <op name="start" interval="0s" timeout="60s" on-fail="restart" \
                id="vip-master-start-0s"/>
-+            <op name="monitor" interval="10s" timeout="60s" on-fail="restart" \
                id="vip-master-monitor-10s"/>
-+            <op name="stop" interval="0s" timeout="60s" on-fail="fence" \
                id="vip-master-stop-0s"/>
-+          </operations>
-+        </primitive>
-+        <primitive id="vip-rep" class="ocf" provider="heartbeat" type="Dummy">
-+          <operations>
-+            <op name="start" interval="0s" timeout="60s" on-fail="stop" \
                id="vip-rep-start-0s"/>
-+            <op name="monitor" interval="10s" timeout="60s" on-fail="restart" \
                id="vip-rep-monitor-10s"/>
-+            <op name="stop" interval="0s" timeout="60s" on-fail="ignore" \
                id="vip-rep-stop-0s"/>
-+          </operations>
-+        </primitive>
-+      </group>
-+      <master id="msPostgresql">
-+        <meta_attributes id="msPostgresql-meta_attributes">
-+          <nvpair name="master-max" value="1" \
                id="msPostgresql-meta_attributes-master-max"/>
-+          <nvpair name="master-node-max" value="1" \
                id="msPostgresql-meta_attributes-master-node-max"/>
-+          <nvpair name="clone-max" value="2" \
                id="msPostgresql-meta_attributes-clone-max"/>
-+          <nvpair name="clone-node-max" value="1" \
                id="msPostgresql-meta_attributes-clone-node-max"/>
-+          <nvpair name="notify" value="true" \
                id="msPostgresql-meta_attributes-notify"/>
-+        </meta_attributes>
-+        <primitive id="pgsql" class="ocf" provider="heartbeat" type="Stateful">
-+          <operations>
-+            <op name="start" interval="0s" timeout="300s" on-fail="restart" \
                id="pgsql-start-0s"/>
-+            <op name="monitor" interval="10s" timeout="60s" on-fail="restart" \
                id="pgsql-monitor-10s"/>
-+            <op name="monitor" role="Master" interval="9s" timeout="60s" \
                on-fail="restart" id="pgsql-monitor-9s"/>
-+            <op name="promote" interval="0s" timeout="300s" on-fail="restart" \
                id="pgsql-promote-0s"/>
-+            <op name="demote" interval="0s" timeout="300s" on-fail="fence" \
                id="pgsql-demote-0s"/>
-+            <op name="notify" interval="0s" timeout="60s" id="pgsql-notify-0s"/>
-+            <op name="stop" interval="0s" timeout="300s" on-fail="fence" \
                id="pgsql-stop-0s"/>
-+          </operations>
-+        </primitive>
-+      </master>
-+    </resources>
-+    <constraints>
-+      <rsc_location id="rsc_location-grpStonith1-4" rsc="grpStonith1">
-+        <rule score="-INFINITY" boolean-op="or" \
                id="rsc_location-grpStonith1-4-rule">
-+          <expression attribute="#uname" operation="eq" value="bl460g8n3" \
                id="rsc_location-grpStonith1-4-rule-expression"/>
-+          <expression attribute="#uname" operation="eq" value="pgsr01" \
                id="rsc_location-grpStonith1-4-rule-expression-0"/>
-+          <expression attribute="#uname" operation="eq" value="pgsr02" \
                id="rsc_location-grpStonith1-4-rule-expression-1"/>
-+        </rule>
-+      </rsc_location>
-+      <rsc_location id="rsc_location-grpStonith2-5" rsc="grpStonith2">
-+        <rule score="-INFINITY" boolean-op="or" \
                id="rsc_location-grpStonith2-5-rule">
-+          <expression attribute="#uname" operation="eq" value="bl460g8n4" \
                id="rsc_location-grpStonith2-5-rule-expression"/>
-+          <expression attribute="#uname" operation="eq" value="pgsr01" \
                id="rsc_location-grpStonith2-5-rule-expression-0"/>
-+          <expression attribute="#uname" operation="eq" value="pgsr02" \
                id="rsc_location-grpStonith2-5-rule-expression-1"/>
-+        </rule>
-+      </rsc_location>
-+      <rsc_location id="rsc_location-msPostgresql-1" rsc="msPostgresql">
-+        <rule score="-INFINITY" id="rsc_location-msPostgresql-1-rule">
-+          <expression attribute="#uname" operation="eq" value="bl460g8n3" \
                id="rsc_location-msPostgresql-1-rule-expression"/>
-+        </rule>
-+        <rule score="-INFINITY" id="rsc_location-msPostgresql-1-rule-0">
-+          <expression attribute="#uname" operation="eq" value="bl460g8n4" \
                id="rsc_location-msPostgresql-1-rule-0-expression"/>
-+        </rule>
-+      </rsc_location>
-+      <rsc_location id="rsc_location-prmDB1-2" rsc="prmDB1">
-+        <rule score="-INFINITY" id="rsc_location-prmDB1-2-rule">
-+          <expression attribute="#uname" operation="eq" value="bl460g8n4" \
                id="rsc_location-prmDB1-2-rule-expression"/>
-+        </rule>
-+      </rsc_location>
-+      <rsc_location id="rsc_location-prmDB2-3" rsc="prmDB2">
-+        <rule score="-INFINITY" id="rsc_location-prmDB2-3-rule">
-+          <expression attribute="#uname" operation="eq" value="bl460g8n3" \
                id="rsc_location-prmDB2-3-rule-expression"/>
-+        </rule>
-+      </rsc_location>
-+      <rsc_colocation id="rsc_colocation-master-group-msPostgresql-1" \
                score="INFINITY" rsc="master-group" with-rsc="msPostgresql" \
                with-rsc-role="Master"/>
-+      <rsc_order id="rsc_order-msPostgresql-master-group-1" score="INFINITY" \
symmetrical="false" first="msPostgresql" first-action="promote" then="master-group" \
                then-action="start"/>
-+      <rsc_order id="rsc_order-msPostgresql-master-group-2" score="0" \
symmetrical="false" first="msPostgresql" first-action="demote" then="master-group" \
                then-action="stop"/>
-+    </constraints>
-+    <fencing-topology>
-+      <fencing-level devices="prmStonith1-2" index="1" target="bl460g8n3" \
                id="fencing"/>
-+      <fencing-level devices="prmStonith2-2" index="1" target="bl460g8n4" \
                id="fencing-0"/>
-+    </fencing-topology>
-+    <rsc_defaults>
-+      <meta_attributes id="rsc-options">
-+        <nvpair name="resource-stickiness" value="INFINITY" \
                id="rsc-options-resource-stickiness"/>
-+        <nvpair name="migration-threshold" value="1" \
                id="rsc-options-migration-threshold"/>
-+      </meta_attributes>
-+    </rsc_defaults>
-+  </configuration>
-+  <status>
-+    <node_state id="3232261400" uname="bl460g8n4" in_ccm="true" crmd="online" \
                crm-debug-origin="do_update_resource" join="member" \
                expected="member">
-+      <lrm id="3232261400">
-+        <lrm_resources>
-+          <lrm_resource id="prmStonith1-2" type="external/ipmi" class="stonith">
-+            <lrm_rsc_op id="prmStonith1-2_last_0" \
operation_key="prmStonith1-2_start_0" operation="start" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="24:3:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;24:3:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="31" rc-code="0" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="1226" queue-time="0" \
op-digest="afe12b5cfc17d93f044b4e7a9cdbcf9b" op-secure-params=" passwd " \
                op-secure-digest="e865267179ef110b6279ff90ad06de48"/>
-+            <lrm_rsc_op id="prmStonith1-2_monitor_3600000" \
operation_key="prmStonith1-2_monitor_3600000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="12:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;12:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="33" rc-code="0" op-status="0" interval="3600000" \
last-rc-change="1439347908" exec-time="1131" queue-time="0" \
op-digest="e8ae3d1e396335d3601757dd89d0ae69" op-secure-params=" passwd " \
                op-secure-digest="e865267179ef110b6279ff90ad06de48"/>
-+          </lrm_resource>
-+          <lrm_resource id="prmStonith2-2" type="external/ipmi" class="stonith">
-+            <lrm_rsc_op id="prmStonith2-2_last_0" \
operation_key="prmStonith2-2_monitor_0" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="16:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;16:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="17" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="0" queue-time="0" \
op-digest="92d167320c2a18fe8f3ea285d9f9ee90" op-secure-params=" passwd " \
                op-secure-digest="1149ea95c2c0b76e99fa8fa192158cbf"/>
-+          </lrm_resource>
-+          <lrm_resource id="vip-master" type="Dummy" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="vip-master_last_0" operation_key="vip-master_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="17:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;17:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="21" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="20" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+          <lrm_resource id="vip-rep" type="Dummy" class="ocf" provider="heartbeat">
-+            <lrm_rsc_op id="vip-rep_last_0" operation_key="vip-rep_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="18:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;18:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="25" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="11" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+          <lrm_resource id="pgsql" type="Stateful" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="pgsql_last_0" operation_key="pgsql_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="19:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;19:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="30" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="17" queue-time="0" \
                op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+          <lrm_resource id="prmDB1" type="VirtualDomain" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="prmDB1_last_0" operation_key="prmDB1_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="13:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;13:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="5" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="228" queue-time="0" \
                op-digest="04e86e87e00cd62d2fde3a8ec03d5bc1"/>
-+          </lrm_resource>
-+          <lrm_resource id="prmDB2" type="VirtualDomain" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="prmDB2_last_0" operation_key="prmDB2_start_0" \
operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="8:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;8:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n4" \
call-id="32" rc-code="0" op-status="0" interval="0" last-run="1439347908" \
last-rc-change="1439347908" exec-time="891" queue-time="0" \
                op-digest="4b7b16f20229da9f7b54b8898eb3de9a"/>
-+            <lrm_rsc_op id="prmDB2_monitor_10000" \
operation_key="prmDB2_monitor_10000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="9:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;9:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n4" \
call-id="34" rc-code="0" op-status="0" interval="10000" last-rc-change="1439347909" \
                exec-time="102" queue-time="0" \
                op-digest="9d078646f59ffe39c041072ed10692be"/>
-+          </lrm_resource>
-+          <lrm_resource id="pgsr02" type="remote" class="ocf" provider="pacemaker" \
                container="prmDB2">
-+            <lrm_rsc_op id="pgsr02_last_0" operation_key="pgsr02_start_0" \
operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="58:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;58:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="1" rc-code="0" op-status="0" interval="0" \
last-run="1439347909" last-rc-change="1439347909" exec-time="0" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" server " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+            <lrm_rsc_op id="pgsr02_monitor_30000" \
operation_key="pgsr02_monitor_30000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="59:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;59:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="2" rc-code="0" op-status="0" interval="30000" \
last-rc-change="1439347937" exec-time="0" queue-time="0" \
                op-digest="02a5bcf940fc8d3239701acb11438d6a"/>
-+            <lrm_rsc_op id="pgsr02_last_failure_0" \
operation_key="pgsr02_monitor_30000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="59:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="4:1;59:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="2" rc-code="1" op-status="4" interval="30000" \
last-rc-change="1439348019" exec-time="0" queue-time="0" \
                op-digest="02a5bcf940fc8d3239701acb11438d6a"/>
-+          </lrm_resource>
-+        </lrm_resources>
-+      </lrm>
-+      <transient_attributes id="3232261400">
-+        <instance_attributes id="status-3232261400">
-+          <nvpair id="status-3232261400-shutdown" name="shutdown" value="0"/>
-+          <nvpair id="status-3232261400-probe_complete" name="probe_complete" \
                value="true"/>
-+          <nvpair id="status-3232261400-fail-count-pgsr02" name="fail-count-pgsr02" \
                value="1"/>
-+          <nvpair id="status-3232261400-last-failure-pgsr02" \
                name="last-failure-pgsr02" value="1439348019"/>
-+        </instance_attributes>
-+      </transient_attributes>
-+    </node_state>
-+    <node_state id="3232261399" uname="bl460g8n3" in_ccm="true" crmd="online" \
                crm-debug-origin="do_update_resource" join="member" \
                expected="member">
-+      <transient_attributes id="3232261399">
-+        <instance_attributes id="status-3232261399">
-+          <nvpair id="status-3232261399-shutdown" name="shutdown" value="0"/>
-+          <nvpair id="status-3232261399-probe_complete" name="probe_complete" \
                value="true"/>
-+        </instance_attributes>
-+      </transient_attributes>
-+      <lrm id="3232261399">
-+        <lrm_resources>
-+          <lrm_resource id="prmStonith1-2" type="external/ipmi" class="stonith">
-+            <lrm_rsc_op id="prmStonith1-2_last_0" \
operation_key="prmStonith1-2_monitor_0" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="7:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;7:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n3" \
call-id="13" rc-code="7" op-status="0" interval="0" last-run="1439347907" \
last-rc-change="1439347907" exec-time="1" queue-time="0" \
op-digest="afe12b5cfc17d93f044b4e7a9cdbcf9b" op-secure-params=" passwd " \
                op-secure-digest="e865267179ef110b6279ff90ad06de48"/>
-+          </lrm_resource>
-+          <lrm_resource id="prmStonith2-2" type="external/ipmi" class="stonith">
-+            <lrm_rsc_op id="prmStonith2-2_last_0" \
operation_key="prmStonith2-2_start_0" operation="start" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="30:3:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;30:3:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="31" rc-code="0" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="1171" queue-time="0" \
op-digest="92d167320c2a18fe8f3ea285d9f9ee90" op-secure-params=" passwd " \
                op-secure-digest="1149ea95c2c0b76e99fa8fa192158cbf"/>
-+            <lrm_rsc_op id="prmStonith2-2_monitor_3600000" \
operation_key="prmStonith2-2_monitor_3600000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="19:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;19:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="33" rc-code="0" op-status="0" interval="3600000" \
last-rc-change="1439347908" exec-time="1105" queue-time="0" \
op-digest="3726b87d5cee560876fee49ef2f9ce67" op-secure-params=" passwd " \
                op-secure-digest="1149ea95c2c0b76e99fa8fa192158cbf"/>
-+          </lrm_resource>
-+          <lrm_resource id="vip-master" type="Dummy" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="vip-master_last_0" operation_key="vip-master_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="9:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;9:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n3" \
call-id="21" rc-code="7" op-status="0" interval="0" last-run="1439347907" \
last-rc-change="1439347907" exec-time="27" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+          <lrm_resource id="vip-rep" type="Dummy" class="ocf" provider="heartbeat">
-+            <lrm_rsc_op id="vip-rep_last_0" operation_key="vip-rep_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="10:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;10:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="25" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="26" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+          <lrm_resource id="pgsql" type="Stateful" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="pgsql_last_0" operation_key="pgsql_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="11:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;11:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="30" rc-code="7" op-status="0" interval="0" \
last-run="1439347907" last-rc-change="1439347907" exec-time="25" queue-time="0" \
                op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+          <lrm_resource id="prmDB2" type="VirtualDomain" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="prmDB2_last_0" operation_key="prmDB2_monitor_0" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="6:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:7;6:3:7:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n3" \
call-id="9" rc-code="7" op-status="0" interval="0" last-run="1439347907" \
last-rc-change="1439347907" exec-time="100" queue-time="0" \
                op-digest="4b7b16f20229da9f7b54b8898eb3de9a"/>
-+          </lrm_resource>
-+          <lrm_resource id="prmDB1" type="VirtualDomain" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="prmDB1_last_0" operation_key="prmDB1_start_0" \
operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="6:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;6:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n3" \
call-id="32" rc-code="0" op-status="0" interval="0" last-run="1439347908" \
last-rc-change="1439347908" exec-time="910" queue-time="0" \
                op-digest="04e86e87e00cd62d2fde3a8ec03d5bc1"/>
-+            <lrm_rsc_op id="prmDB1_monitor_10000" \
operation_key="prmDB1_monitor_10000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="7:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;7:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" on_node="bl460g8n3" \
call-id="34" rc-code="0" op-status="0" interval="10000" last-rc-change="1439347909" \
                exec-time="100" queue-time="0" \
                op-digest="b36f484e0f0d2fd6243f33cbe129b190"/>
-+          </lrm_resource>
-+          <lrm_resource id="pgsr01" type="remote" class="ocf" provider="pacemaker" \
                container="prmDB1">
-+            <lrm_rsc_op id="pgsr01_last_0" operation_key="pgsr01_start_0" \
operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="56:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;56:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="1" rc-code="0" op-status="0" interval="0" \
last-run="1439347909" last-rc-change="1439347909" exec-time="0" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" server " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+            <lrm_rsc_op id="pgsr01_monitor_30000" \
operation_key="pgsr01_monitor_30000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="57:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;57:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="2" rc-code="0" op-status="0" interval="30000" \
last-rc-change="1439347938" exec-time="0" queue-time="0" \
                op-digest="02a5bcf940fc8d3239701acb11438d6a"/>
-+          </lrm_resource>
-+        </lrm_resources>
-+      </lrm>
-+    </node_state>
-+    <node_state remote_node="true" id="pgsr02" uname="pgsr02" \
                crm-debug-origin="do_update_resource" node_fenced="0">
-+      <transient_attributes id="pgsr02">
-+        <instance_attributes id="status-pgsr02">
-+          <nvpair id="status-pgsr02-master-pgsql" name="master-pgsql" value="10"/>
-+        </instance_attributes>
-+      </transient_attributes>
-+      <lrm id="pgsr02">
-+        <lrm_resources>
-+          <lrm_resource id="pgsql" type="Stateful" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="pgsql_last_0" operation_key="pgsql_promote_0" \
operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="38:5:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;38:5:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="16" rc-code="0" op-status="0" interval="0" \
last-run="1439347938" last-rc-change="1439347938" exec-time="351" queue-time="0" \
                op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+            <lrm_rsc_op id="pgsql_monitor_9000" operation_key="pgsql_monitor_9000" \
operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="40:6:8:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:8;40:6:8:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="28" rc-code="8" op-status="0" interval="9000" \
last-rc-change="1439347938" exec-time="10" queue-time="1" \
                op-digest="873ed4f07792aa8ff18f3254244675ea"/>
-+          </lrm_resource>
-+          <lrm_resource id="vip-master" type="Dummy" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="vip-master_last_0" operation_key="vip-master_start_0" \
operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="28:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;28:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="27" rc-code="0" op-status="0" interval="0" \
last-run="1439347938" last-rc-change="1439347938" exec-time="86" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+            <lrm_rsc_op id="vip-master_monitor_10000" \
operation_key="vip-master_monitor_10000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="29:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;29:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="29" rc-code="0" op-status="0" interval="10000" \
last-rc-change="1439347938" exec-time="14" queue-time="0" \
                op-digest="873ed4f07792aa8ff18f3254244675ea"/>
-+          </lrm_resource>
-+          <lrm_resource id="vip-rep" type="Dummy" class="ocf" provider="heartbeat">
-+            <lrm_rsc_op id="vip-rep_last_0" operation_key="vip-rep_start_0" \
operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="30:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;30:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="33" rc-code="0" op-status="0" interval="0" \
last-run="1439347939" last-rc-change="1439347939" exec-time="15" queue-time="0" \
op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8" op-force-restart=" state " \
                op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+            <lrm_rsc_op id="vip-rep_monitor_10000" \
operation_key="vip-rep_monitor_10000" operation="monitor" \
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="31:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;31:6:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n4" call-id="34" rc-code="0" op-status="0" interval="10000" \
last-rc-change="1439347939" exec-time="14" queue-time="0" \
                op-digest="873ed4f07792aa8ff18f3254244675ea"/>
-+          </lrm_resource>
-+        </lrm_resources>
-+      </lrm>
-+    </node_state>
-+    <node_state remote_node="true" id="pgsr01" uname="pgsr01" \
                crm-debug-origin="do_update_resource" node_fenced="0">
-+      <transient_attributes id="pgsr01">
-+        <instance_attributes id="status-pgsr01">
-+          <nvpair id="status-pgsr01-master-pgsql" name="master-pgsql" value="10"/>
-+        </instance_attributes>
-+      </transient_attributes>
-+      <lrm id="pgsr01">
-+        <lrm_resources>
-+          <lrm_resource id="pgsql" type="Stateful" class="ocf" \
                provider="heartbeat">
-+            <lrm_rsc_op id="pgsql_last_0" operation_key="pgsql_promote_0" \
operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" \
transition-key="42:8:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
transition-magic="0:0;42:8:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910" \
on_node="bl460g8n3" call-id="24" rc-code="0" op-status="0" interval="0" \
last-run="1439348020" last-rc-change="1439348020" exec-time="323" queue-time="0" \
                op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
-+          </lrm_resource>
-+        </lrm_resources>
-+      </lrm>
-+    </node_state>
-+  </status>
-+</cib>
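
A note on reading the status section above: each lrm_rsc_op records its outcome both in the rc-code/op-status attributes and packed into transition-magic as op-status:rc-code followed by the transition-key (action:transition:target-rc:transition-uuid). Pacemaker parses these internally; purely as an illustration (not pacemaker's decoder), the string can be pulled apart like so:

    #include <stdio.h>

    /* Illustrative only. Splits a transition-magic value such as the
     * failed pgsr02 monitor above,
     * "4:1;59:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910",
     * into op-status (here 4, an error), rc-code (1) and the
     * transition key fields.
     */
    int main(void)
    {
        const char *magic = "4:1;59:4:0:6cacb40a-dbbb-49b0-bac7-1794a61d2910";
        int op_status, rc, action, transition, target_rc;
        char uuid[37];

        if (sscanf(magic, "%d:%d;%d:%d:%d:%36s",
                   &op_status, &rc, &action, &transition, &target_rc, uuid) == 6) {
            printf("op-status=%d rc-code=%d action=%d transition=%d "
                   "target-rc=%d uuid=%s\n",
                   op_status, rc, action, transition, target_rc, uuid);
        }
        return 0;
    }
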
diff --git a/0008-Fix-tools-memory-leak-in-crm_resource.patch b/0008-Fix-tools-memory-leak-in-crm_resource.patch
deleted file mode 100644
index c29561f..0000000
--- a/0008-Fix-tools-memory-leak-in-crm_resource.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From: Ken Gaillot <kgaillot@redhat.com>
-Date: Mon, 17 Aug 2015 10:28:19 -0500
-Subject: [PATCH] Fix: tools: memory leak in crm_resource
-
-(cherry picked from commit c11bc4b856b07d5ea5b8284a3d566dd782e6bb7c)
----
- tools/crm_resource_runtime.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index f260e19..b9427bc 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -399,9 +399,11 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-                             &local_attr_id);
-
-     if (rc == -ENXIO) {
-+        free(lookup_id);
-         return pcmk_ok;
-
-     } else if (rc != pcmk_ok) {
-+        free(lookup_id);
-         return rc;
-     }
-
-@@ -424,6 +426,7 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-                attr_name ? " name=" : "", attr_name ? attr_name : "");
-     }
-
-+    free(lookup_id);
-     free_xml(xml_obj);
-     free(local_attr_id);
-     return rc;
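
The leak fixed above is the classic early-return pattern: lookup_id is allocated once, but the function can leave through three different paths, and each must release it. A minimal sketch of the rule being enforced (not the crm_resource code itself):

    #include <stdlib.h>
    #include <string.h>

    /* Every exit path must release the heap-allocated id; funnelling
     * all returns through one cleanup point makes that hard to miss.
     */
    static int delete_attribute(const char *rsc_id)
    {
        int rc = 0;
        char *lookup_id = strdup(rsc_id);

        if (lookup_id == NULL) {
            return -1;
        }

        /* ... perform the lookup and deletion, setting rc ... */

        free(lookup_id);   /* single exit: no path can leak it */
        return rc;
    }
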
diff --git a/0009-Fix-pengine-The-failed-action-of-the-resource-that-o.patch b/0009-Fix-pengine-The-failed-action-of-the-resource-that-o.patch
deleted file mode 100644
index 1ddba9f..0000000
--- a/0009-Fix-pengine-The-failed-action-of-the-resource-that-o.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From: Hideo Yamauchi <renayama19661014@ybb.ne.jp>
-Date: Fri, 21 Aug 2015 14:12:33 +0900
-Subject: [PATCH] Fix: pengine: The failed action of the resource that occurred
- in shutdown is not displayed.
-
-This problem appears to have been introduced when an older check was
-folded into record_failed_op() by the following commit:
-
-*
-https://github.com/ClusterLabs/pacemaker/commit/9cd666ac15a2998f4543e1dac33edea36bbcf930#diff-7dae505817fa61e544018e581ee45933
-
-(cherry picked from commit 119df5c0bd8fac02bd36e45a28288dcf4624b89d)
----
- lib/pengine/unpack.c | 4 +---
- 1 file changed, 1 insertion(+), 3 deletions(-)
-
-diff --git a/lib/pengine/unpack.c b/lib/pengine/unpack.c
-index 0f83be4..156a192 100644
---- a/lib/pengine/unpack.c
-+++ b/lib/pengine/unpack.c
-@@ -2546,9 +2546,7 @@ record_failed_op(xmlNode *op, node_t* node, pe_working_set_t * \
                data_set)
-     xmlNode *xIter = NULL;
-     const char *op_key = crm_element_value(op, XML_LRM_ATTR_TASK_KEY);
-
--    if (node->details->shutdown) {
--        return;
--    } else if(node->details->online == FALSE) {
-+    if ((node->details->shutdown) && (node->details->online == FALSE)) {
-         return;
-     }
-
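
In other words, the patch narrows the skip condition in record_failed_op(): a failure is now ignored only when the node is both marked for shutdown and already offline, rather than whenever either was true. As a predicate (illustrative, not the unpack.c source):

    #include <stdbool.h>
    #include <stdio.h>

    /* Old rule: skip on shutdown OR offline -- which hid failures
     * that happened while a node was still online but shutting down.
     */
    static bool skip_old(bool shutdown, bool online)
    {
        return shutdown || !online;
    }

    /* New rule: skip only when both hold, so those failures are
     * recorded and displayed.
     */
    static bool skip_new(bool shutdown, bool online)
    {
        return shutdown && !online;
    }

    int main(void)
    {
        /* node shutting down but still online: old rule hid the failure */
        printf("old=%d new=%d\n", skip_old(true, true), skip_new(true, true));
        return 0;
    }
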
diff --git a/0010-Log-services-Reduce-severity-of-noisy-log-messages.patch b/0010-Log-services-Reduce-severity-of-noisy-log-messages.patch
deleted file mode 100644
index 40aeb8b..0000000
--- a/0010-Log-services-Reduce-severity-of-noisy-log-messages.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From: "Gao,Yan" <ygao@suse.com>
-Date: Wed, 26 Aug 2015 18:12:56 +0200
-Subject: [PATCH] Log: services: Reduce severity of noisy log messages
-
-They occurred for every monitor operation of systemd resources.
-
-(cherry picked from commit a77c401a3fcdedec165c05d27a75d75abcebf4a1)
----
- lib/services/services.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/lib/services/services.c b/lib/services/services.c
-index 3f40078..abf1458 100644
---- a/lib/services/services.c
-+++ b/lib/services/services.c
-@@ -366,15 +366,15 @@ services_set_op_pending(svc_action_t *op, DBusPendingCall \
                *pending)
-         if (pending) {
-             crm_info("Lost pending %s DBus call (%p)", op->id, \
                op->opaque->pending);
-         } else {
--            crm_info("Done with pending %s DBus call (%p)", op->id, \
                op->opaque->pending);
-+            crm_trace("Done with pending %s DBus call (%p)", op->id, \
                op->opaque->pending);
-         }
-         dbus_pending_call_unref(op->opaque->pending);
-     }
-     op->opaque->pending = pending;
-     if (pending) {
--        crm_info("Updated pending %s DBus call (%p)", op->id, pending);
-+        crm_trace("Updated pending %s DBus call (%p)", op->id, pending);
-     } else {
--        crm_info("Cleared pending %s DBus call", op->id);
-+        crm_trace("Cleared pending %s DBus call", op->id);
-     }
- }
- #endif
diff --git a/0011-Fix-xml-Mark-xml-nodes-as-dirty-if-any-children-move.patch b/0011-Fix-xml-Mark-xml-nodes-as-dirty-if-any-children-move.patch
deleted file mode 100644
index c67a465..0000000
--- a/0011-Fix-xml-Mark-xml-nodes-as-dirty-if-any-children-move.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-From: "Gao,Yan" <ygao@suse.com>
-Date: Wed, 26 Aug 2015 16:28:38 +0200
-Subject: [PATCH] Fix: xml: Mark xml nodes as dirty if any children move
-
-Otherwise, if nothing else changed in the new xml and the versions
-weren't bumped, crm_diff would output an empty xml diff.
-
-(cherry picked from commit 1073786ec24f3bbf26a0f6a5b0614a65edac4301)
----
- lib/common/xml.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/lib/common/xml.c b/lib/common/xml.c
-index 299c7bf..353eb4b 100644
---- a/lib/common/xml.c
-+++ b/lib/common/xml.c
-@@ -4275,6 +4275,7 @@ __xml_diff_object(xmlNode * old, xmlNode * new)
-             if(p_old != p_new) {
-                 crm_info("%s.%s moved from %d to %d - %d",
-                          new_child->name, ID(new_child), p_old, p_new);
-+                __xml_node_dirty(new);
-                 p->flags |= xpf_moved;
-
-                 if(p_old > p_new) {
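
The one-line fix above restores an invariant of the diff machinery: a child that changes position must also dirty its parent, otherwise a document in which nothing else changed diffs as empty. Schematically (an illustrative sketch, not the libxml-based code):

    #include <stdbool.h>
    #include <stddef.h>

    struct node {
        struct node *parent;
        int old_pos;
        int new_pos;
        bool moved;
        bool dirty;
    };

    static void check_moved(struct node *child)
    {
        if (child->old_pos != child->new_pos) {
            child->moved = true;
            if (child->parent != NULL) {
                child->parent->dirty = true;   /* the step that was missing */
            }
        }
    }

    int main(void)
    {
        struct node parent = { 0 };
        struct node child = { &parent, 2, 0, false, false };

        check_moved(&child);
        return parent.dirty ? 0 : 1;   /* parent is now dirty */
    }
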
diff --git a/0012-Feature-crmd-Implement-reliable-event-notifications.patch b/0012-Feature-crmd-Implement-reliable-event-notifications.patch
deleted file mode 100644
index 94e3307..0000000
--- a/0012-Feature-crmd-Implement-reliable-event-notifications.patch
+++ /dev/null
@@ -1,565 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 1 Sep 2015 13:17:45 +1000
-Subject: [PATCH] Feature: crmd: Implement reliable event notifications
-
-(cherry picked from commit 0cd1b8f02b403976afe106e0ca3a8a8a16864c6c)
----
- crmd/Makefile.am            |   2 +-
- crmd/callbacks.c            |   4 +
- crmd/control.c              |  67 +++++++++++++---
- crmd/crmd_utils.h           |   1 +
- crmd/lrm.c                  |   2 +
- crmd/notify.c               | 188 ++++++++++++++++++++++++++++++++++++++++++++
- crmd/notify.h               |  30 +++++++
- crmd/te_utils.c             |   2 +
- cts/CIB.py                  |   2 +
- extra/pcmk_notify_sample.sh |  68 ++++++++++++++++
- include/crm_internal.h      |   1 +
- lib/common/utils.c          |  27 +++++++
- 12 files changed, 380 insertions(+), 14 deletions(-)
- create mode 100644 crmd/notify.c
- create mode 100644 crmd/notify.h
- create mode 100755 extra/pcmk_notify_sample.sh
-
-diff --git a/crmd/Makefile.am b/crmd/Makefile.am
-index 8e5e1df..984f5d0 100644
---- a/crmd/Makefile.am
-+++ b/crmd/Makefile.am
-@@ -28,7 +28,7 @@ noinst_HEADERS	= crmd.h crmd_fsa.h crmd_messages.h fsa_defines.h \
                \
- 		fsa_matrix.h fsa_proto.h crmd_utils.h crmd_callbacks.h \
- 		crmd_lrm.h te_callbacks.h tengine.h
-
--crmd_SOURCES	= main.c crmd.c corosync.c					\
-+crmd_SOURCES	= main.c crmd.c corosync.c notify.c				\
- 		fsa.c control.c messages.c membership.c callbacks.c		\
- 		election.c join_client.c join_dc.c subsystems.c throttle.c	\
- 		cib.c pengine.c tengine.c lrm.c lrm_state.c remote_lrmd_ra.c	\
-diff --git a/crmd/callbacks.c b/crmd/callbacks.c
-index f646927..38fb30b 100644
---- a/crmd/callbacks.c
-+++ b/crmd/callbacks.c
-@@ -126,6 +126,7 @@ peer_update_callback(enum crm_status_type type, crm_node_t * \
                node, const void *d
-         case crm_status_nstate:
-             crm_info("%s is now %s (was %s)",
-                      node->uname, state_text(node->state), state_text(data));
-+
-             if (safe_str_eq(data, node->state)) {
-                 /* State did not change */
-                 return;
-@@ -147,7 +148,10 @@ peer_update_callback(enum crm_status_type type, crm_node_t * \
                node, const void *d
-                     }
-                 }
-             }
-+
-+            crmd_notify_node_event(node);
-             break;
-+
-         case crm_status_processes:
-             if (data) {
-                 old = *(const uint32_t *)data;
-diff --git a/crmd/control.c b/crmd/control.c
-index f4add49..d92f46b 100644
---- a/crmd/control.c
-+++ b/crmd/control.c
-@@ -873,28 +873,64 @@ do_recover(long long action,
-
- /* *INDENT-OFF* */
- pe_cluster_option crmd_opts[] = {
--	/* name, old-name, validate, default, description */
--	{ "dc-version", NULL, "string", NULL, "none", NULL, "Version of Pacemaker on the \
cluster's DC.", "Includes the hash which identifies the exact Mercurial changeset it \
                was built from.  Used for diagnostic purposes." },
--	{ "cluster-infrastructure", NULL, "string", NULL, "heartbeat", NULL, "The \
messaging stack on which Pacemaker is currently running.", "Used for informational \
                and diagnostic purposes." },
--	{ XML_CONFIG_ATTR_DC_DEADTIME, "dc_deadtime", "time", NULL, "20s", &check_time, \
"How long to wait for a response from other nodes during startup.", "The \"correct\" \
value will depend on the speed/load of your network and the type of switches used." \
                },
-+	/* name, old-name, validate, values, default, short description, long description \
                */
-+	{ "dc-version", NULL, "string", NULL, "none", NULL,
-+          "Version of Pacemaker on the cluster's DC.",
-+          "Includes the hash which identifies the exact changeset it was built \
                from.  Used for diagnostic purposes."
-+        },
-+	{ "cluster-infrastructure", NULL, "string", NULL, "heartbeat", NULL,
-+          "The messaging stack on which Pacemaker is currently running.",
-+          "Used for informational and diagnostic purposes." },
-+	{ XML_CONFIG_ATTR_DC_DEADTIME, "dc_deadtime", "time", NULL, "20s", &check_time,
-+          "How long to wait for a response from other nodes during startup.",
-+          "The \"correct\" value will depend on the speed/load of your network and \
                the type of switches used."
-+        },
- 	{ XML_CONFIG_ATTR_RECHECK, "cluster_recheck_interval", "time",
--	  "Zero disables polling.  Positive values are an interval in seconds (unless \
                other SI units are specified. eg. 5min)", "15min", &check_timer,
-+	  "Zero disables polling.  Positive values are an interval in seconds (unless \
                other SI units are specified. eg. 5min)",
-+          "15min", &check_timer,
- 	  "Polling interval for time based changes to options, resource parameters and \
                constraints.",
- 	  "The Cluster is primarily event driven, however the configuration can have \
                elements that change based on time."
--	  "  To ensure these changes take effect, we can optionally poll the cluster's \
                status for changes." },
-+	  "  To ensure these changes take effect, we can optionally poll the cluster's \
                status for changes."
-+        },
-+
-+	{ "notification-script", NULL, "string", NULL, "/dev/null", &check_script,
-+          "Notification script to be called after significant cluster events",
-+          "Full path to a script that will be invoked when resources \
                start/stop/fail, fencing occurs or nodes join/leave the cluster.\n"
-+          "Must exist on all nodes in the cluster."
-+        },
-+	{ "notification-target", NULL, "string", NULL, "", NULL,
-+          "Destination for notifications (Optional)",
-+          "Where should the supplied script send notifications to.  Useful to avoid \
                hard-coding this in the script."
-+        },
-+
- 	{ "load-threshold", NULL, "percentage", NULL, "80%", &check_utilization,
- 	  "The maximum amount of system resources that should be used by nodes in the \
                cluster",
- 	  "The cluster will slow down its recovery process when the amount of system \
                resources used"
--          " (currently CPU) approaches this limit", },
-+          " (currently CPU) approaches this limit",
-+        },
- 	{ "node-action-limit", NULL, "integer", NULL, "0", &check_number,
-           "The maximum number of jobs that can be scheduled per node. Defaults to \
                2x cores"},
--	{ XML_CONFIG_ATTR_ELECTION_FAIL, "election_timeout", "time", NULL, "2min", \
&check_timer, "*** Advanced Use Only ***.", "If need to adjust this value, it \
                probably indicates the presence of a bug." },
--	{ XML_CONFIG_ATTR_FORCE_QUIT, "shutdown_escalation", "time", NULL, "20min", \
&check_timer, "*** Advanced Use Only ***.", "If need to adjust this value, it \
                probably indicates the presence of a bug." },
--	{ "crmd-integration-timeout", NULL, "time", NULL, "3min", &check_timer, "*** \
Advanced Use Only ***.", "If need to adjust this value, it probably indicates the \
                presence of a bug." },
--	{ "crmd-finalization-timeout", NULL, "time", NULL, "30min", &check_timer, "*** \
Advanced Use Only ***.", "If you need to adjust this value, it probably indicates the \
                presence of a bug." },
--	{ "crmd-transition-delay", NULL, "time", NULL, "0s", &check_timer, "*** Advanced \
Use Only ***\nEnabling this option will slow down cluster recovery under all \
conditions", "Delay cluster recovery for the configured interval to allow for \
additional/related events to occur.\nUseful if your configuration is sensitive to the \
                order in which ping updates arrive." },
-+	{ XML_CONFIG_ATTR_ELECTION_FAIL, "election_timeout", "time", NULL, "2min", \
                &check_timer,
-+          "*** Advanced Use Only ***.", "If need to adjust this value, it probably \
                indicates the presence of a bug."
-+        },
-+	{ XML_CONFIG_ATTR_FORCE_QUIT, "shutdown_escalation", "time", NULL, "20min", \
                &check_timer,
-+          "*** Advanced Use Only ***.", "If need to adjust this value, it probably \
                indicates the presence of a bug."
-+        },
-+	{ "crmd-integration-timeout", NULL, "time", NULL, "3min", &check_timer,
-+          "*** Advanced Use Only ***.", "If need to adjust this value, it probably \
                indicates the presence of a bug."
-+        },
-+	{ "crmd-finalization-timeout", NULL, "time", NULL, "30min", &check_timer,
-+          "*** Advanced Use Only ***.", "If you need to adjust this value, it \
                probably indicates the presence of a bug."
-+        },
-+	{ "crmd-transition-delay", NULL, "time", NULL, "0s", &check_timer,
-+          "*** Advanced Use Only ***\n"
-+          "Enabling this option will slow down cluster recovery under all \
                conditions",
-+          "Delay cluster recovery for the configured interval to allow for \
                additional/related events to occur.\n"
-+          "Useful if your configuration is sensitive to the order in which ping \
                updates arrive."
-+        },
- 	{ "stonith-watchdog-timeout", NULL, "time", NULL, NULL, &check_timer,
--	  "How long to wait before we can assume nodes are safely down", NULL },
-+	  "How long to wait before we can assume nodes are safely down", NULL
-+        },
- 	{ "no-quorum-policy", "no_quorum_policy", "enum", "stop, freeze, ignore, suicide", \
                "stop", &check_quorum, NULL, NULL },
-
- #if SUPPORT_PLUGIN
-@@ -927,6 +963,7 @@ crmd_pref(GHashTable * options, const char *name)
- static void
- config_query_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void \
                *user_data)
- {
-+    const char *script = NULL;
-     const char *value = NULL;
-     GHashTable *config_hash = NULL;
-     crm_time_t *now = crm_time_new(NULL);
-@@ -955,6 +992,10 @@ config_query_callback(xmlNode * msg, int call_id, int rc, \
                xmlNode * output, void
-
-     verify_crmd_options(config_hash);
-
-+    script = crmd_pref(config_hash, "notification-script");
-+    value  = crmd_pref(config_hash, "notification-target");
-+    crmd_enable_notifications(script, value);
-+
-     value = crmd_pref(config_hash, XML_CONFIG_ATTR_DC_DEADTIME);
-     election_trigger->period_ms = crm_get_msec(value);
-
-diff --git a/crmd/crmd_utils.h b/crmd/crmd_utils.h
-index 78214bf..7e8c3e6 100644
---- a/crmd/crmd_utils.h
-+++ b/crmd/crmd_utils.h
-@@ -21,6 +21,7 @@
- #  include <crm/crm.h>
- #  include <crm/common/xml.h>
- #  include <crm/cib/internal.h> /* For CIB_OP_MODIFY */
-+#  include "notify.h"
-
- #  define CLIENT_EXIT_WAIT 30
- #  define FAKE_TE_ID	"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-diff --git a/crmd/lrm.c b/crmd/lrm.c
-index 418e7cf..48195e8 100644
---- a/crmd/lrm.c
-+++ b/crmd/lrm.c
-@@ -2415,6 +2415,8 @@ process_lrm_event(lrm_state_t * lrm_state, lrmd_event_data_t * \
                op, struct recurr
-         free(prefix);
-     }
-
-+    crmd_notify_resource_op(lrm_state->node_name, op);
-+
-     if (op->rsc_deleted) {
-         crm_info("Deletion of resource '%s' complete after %s", op->rsc_id, \
                op_key);
-         delete_rsc_entry(lrm_state, NULL, op->rsc_id, NULL, pcmk_ok, NULL);
-diff --git a/crmd/notify.c b/crmd/notify.c
-new file mode 100644
-index 0000000..980bfa6
---- /dev/null
-+++ b/crmd/notify.c
-@@ -0,0 +1,188 @@
-+/*
-+ * Copyright (C) 2015 Andrew Beekhof <andrew@beekhof.net>
-+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2 of the License, or (at your option) any later version.
-+ *
-+ * This software is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+ * General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public
-+ * License along with this library; if not, write to the Free Software
-+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
-+ */
-+
-+#include <crm_internal.h>
-+#include <crm/crm.h>
-+#include <crm/msg_xml.h>
-+#include "notify.h"
-+
-+char *notify_script = NULL;
-+char *notify_target = NULL;
-+
-+
-+static const char *notify_keys[] =
-+{
-+    "CRM_notify_recipient",
-+    "CRM_notify_node",
-+    "CRM_notify_rsc",
-+    "CRM_notify_task",
-+    "CRM_notify_interval",
-+    "CRM_notify_desc",
-+    "CRM_notify_status",
-+    "CRM_notify_target_rc",
-+    "CRM_notify_rc",
-+    "CRM_notify_kind",
-+    "CRM_notify_version",
-+};
-+
-+
-+void
-+crmd_enable_notifications(const char *script, const char *target)
-+{
-+    free(notify_script);
-+    notify_script = NULL;
-+
-+    free(notify_target);
-+    notify_target = NULL;
-+
-+    if(safe_str_eq(script, "/dev/null")) {
-+        crm_notice("Notifications disabled");
-+        return;
-+    }
-+
-+    notify_script = strdup(script);
-+    notify_target = strdup(target);
-+    crm_notice("Notifications enabled");
-+}
-+
-+static void
-+set_notify_key(const char *name, const char *cvalue, char *value)
-+{
-+    int lpc;
-+    bool found = 0;
-+
-+    if(cvalue == NULL) {
-+        cvalue = value;
-+    }
-+
-+    for(lpc = 0; lpc < DIMOF(notify_keys); lpc++) {
-+        if(safe_str_eq(name, notify_keys[lpc])) {
-+            found = 1;
-+            crm_trace("Setting notify key %s = '%s'", name, cvalue);
-+            setenv(name, cvalue, 1);
-+            break;
-+        }
-+    }
-+
-+    CRM_ASSERT(found != 0);
-+    free(value);
-+}
-+
-+
-+static void
-+send_notification(const char *kind)
-+{
-+    int lpc;
-+    pid_t pid;
-+
-+    crm_debug("Sending '%s' notification to '%s' via '%s'", kind, notify_target, \
                notify_script);
-+
-+    set_notify_key("CRM_notify_recipient", notify_target, NULL);
-+    set_notify_key("CRM_notify_kind", kind, NULL);
-+    set_notify_key("CRM_notify_version", VERSION, NULL);
-+
-+    pid = fork();
-+    if (pid == -1) {
-+        crm_perror(LOG_ERR, "notification failed");
-+    }
-+
-+    if (pid == 0) {
-+        /* crm_debug("notification: I am the child. Executing the notification \
                program."); */
-+        execl(notify_script, notify_script, NULL);
-+        exit(EXIT_FAILURE);
-+
-+    } else {
-+        for(lpc = 0; lpc < DIMOF(notify_keys); lpc++) {
-+            unsetenv(notify_keys[lpc]);
-+        }
-+    }
-+}
-+
-+void crmd_notify_node_event(crm_node_t *node)
-+{
-+    if(notify_script == NULL) {
-+        return;
-+    }
-+
-+    set_notify_key("CRM_notify_node", node->uname, NULL);
-+    set_notify_key("CRM_notify_desc", node->state, NULL);
-+
-+    send_notification("node");
-+}
-+
-+void
-+crmd_notify_fencing_op(stonith_event_t * e)
-+{
-+    char *desc = NULL;
-+
-+    if(notify_script) {
-+        return;
-+    }
-+
-+    desc = crm_strdup_printf("Operation %s requested by %s for peer %s: %s \
                (ref=%s)",
-+                                   e->operation, e->origin, e->target, \
                pcmk_strerror(e->result),
-+                                   e->id);
-+
-+    set_notify_key("CRM_notify_node", e->target, NULL);
-+    set_notify_key("CRM_notify_task", e->operation, NULL);
-+    set_notify_key("CRM_notify_desc", NULL, desc);
-+    set_notify_key("CRM_notify_rc", NULL, crm_itoa(e->result));
-+
-+    send_notification("fencing");
-+}
-+
-+void
-+crmd_notify_resource_op(const char *node, lrmd_event_data_t * op)
-+{
-+    int target_rc = 0;
-+
-+    if(notify_script == NULL) {
-+        return;
-+    }
-+
-+    target_rc = rsc_op_expected_rc(op);
-+    if(op->interval == 0 && target_rc == op->rc && safe_str_eq(op->op_type, \
                RSC_STATUS)) {
-+        /* Leave it up to the script if they want to notify for
-+         * 'failed' probes, only swallow ones for which the result was
-+         * unexpected.
-+         *
-+         * Even if we find a resource running, it was probably because
-+         * someone erased the status section.
-+         */
-+        return;
-+    }
-+
-+    set_notify_key("CRM_notify_node", node, NULL);
-+
-+    set_notify_key("CRM_notify_rsc", op->rsc_id, NULL);
-+    set_notify_key("CRM_notify_task", op->op_type, NULL);
-+    set_notify_key("CRM_notify_interval", NULL, crm_itoa(op->interval));
-+
-+    set_notify_key("CRM_notify_target_rc", NULL, crm_itoa(target_rc));
-+    set_notify_key("CRM_notify_status", NULL, crm_itoa(op->op_status));
-+    set_notify_key("CRM_notify_rc", NULL, crm_itoa(op->rc));
-+
-+    if(op->op_status == PCMK_LRM_OP_DONE) {
-+        set_notify_key("CRM_notify_desc", services_ocf_exitcode_str(op->rc), NULL);
-+    } else {
-+        set_notify_key("CRM_notify_desc", services_lrm_status_str(op->op_status), \
                NULL);
-+    }
-+
-+    send_notification("resource");
-+}
-+
-diff --git a/crmd/notify.h b/crmd/notify.h
-new file mode 100644
-index 0000000..4b138ea
---- /dev/null
-+++ b/crmd/notify.h
-@@ -0,0 +1,30 @@
-+/*
-+ * Copyright (C) 2015 Andrew Beekhof <andrew@beekhof.net>
-+ *
-+ * This program is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2 of the License, or (at your option) any later version.
-+ *
-+ * This software is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+ * General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU General Public
-+ * License along with this library; if not, write to the Free Software
-+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
-+ */
-+#ifndef CRMD_NOTIFY__H
-+#  define CRMD_NOTIFY__H
-+
-+#  include <crm/crm.h>
-+#  include <crm/cluster.h>
-+#  include <crm/stonith-ng.h>
-+
-+void crmd_enable_notifications(const char *script, const char *target);
-+void crmd_notify_node_event(crm_node_t *node);
-+void crmd_notify_fencing_op(stonith_event_t * e);
-+void crmd_notify_resource_op(const char *node, lrmd_event_data_t * op);
-+
-+#endif
-diff --git a/crmd/te_utils.c b/crmd/te_utils.c
-index a1d29f6..22551ba 100644
---- a/crmd/te_utils.c
-+++ b/crmd/te_utils.c
-@@ -124,6 +124,8 @@ tengine_stonith_notify(stonith_t * st, stonith_event_t * \
                st_event)
-         return;
-     }
-
-+    crmd_notify_fencing_op(st_event);
-+
-     if (st_event->result == pcmk_ok && safe_str_eq("on", st_event->action)) {
-         crm_notice("%s was successfully unfenced by %s (at the request of %s)",
-                    st_event->target, st_event->executioner ? st_event->executioner \
                : "<anyone>", st_event->origin);
-diff --git a/cts/CIB.py b/cts/CIB.py
-index 8fbba6c..cd3a6a1 100644
---- a/cts/CIB.py
-+++ b/cts/CIB.py
-@@ -219,6 +219,8 @@ class CIB11(ConfigBase):
-         o["dc-deadtime"] = "5s"
-         o["no-quorum-policy"] = no_quorum
-         o["expected-quorum-votes"] = self.num_nodes
-+        o["notification-script"] = "/var/lib/pacemaker/notify.sh"
-+        o["notification-target"] = "/var/lib/pacemaker/notify.log"
-
-         if self.CM.Env["DoBSC"] == 1:
-             o["ident-string"] = "Linux-HA TEST configuration file - REMOVEME!!"
-diff --git a/extra/pcmk_notify_sample.sh b/extra/pcmk_notify_sample.sh
-new file mode 100755
-index 0000000..83cf8e9
---- /dev/null
-+++ b/extra/pcmk_notify_sample.sh
-@@ -0,0 +1,68 @@
-+#!/bin/bash
-+#
-+# Copyright (C) 2015 Andrew Beekhof <andrew@beekhof.net>
-+#
-+# This program is free software; you can redistribute it and/or
-+# modify it under the terms of the GNU General Public
-+# License as published by the Free Software Foundation; either
-+# version 2 of the License, or (at your option) any later version.
-+#
-+# This software is distributed in the hope that it will be useful,
-+# but WITHOUT ANY WARRANTY; without even the implied warranty of
-+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+# General Public License for more details.
-+#
-+# You should have received a copy of the GNU General Public
-+# License along with this library; if not, write to the Free Software
-+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
-+
-+if [ -z $CRM_notify_version ]; then
-+    echo "Pacemaker version 1.1.14 is required" >> ${CRM_notify_recipient}
-+    exit 0
-+fi
-+
-+case $CRM_notify_kind in
-+    node)
-+	echo "Node '${CRM_notify_node}' is now '${CRM_notify_desc}'" >> \
                ${CRM_notify_recipient}
-+	;;
-+    fencing)
-+	# Other keys:
-+	#
-+	# CRM_notify_node
-+	# CRM_notify_task
-+	# CRM_notify_rc
-+	#
-+	echo "Fencing ${CRM_notify_desc}" >> ${CRM_notify_recipient}
-+	;;
-+    resource)
-+	# Other keys:
-+	#
-+	# CRM_notify_target_rc
-+	# CRM_notify_status
-+	# CRM_notify_rc
-+	#
-+	if [ ${CRM_notify_interval} = "0" ]; then
-+	    CRM_notify_interval=""
-+	else
-+	    CRM_notify_interval=" (${CRM_notify_interval})"
-+	fi
-+
-+	if [ ${CRM_notify_target_rc} = "0" ]; then
-+	    CRM_notify_target_rc=""
-+	else
-+	    CRM_notify_target_rc=" (target: ${CRM_notify_target_rc})"
-+	fi
-+
-+	case ${CRM_notify_desc} in
-+	    Cancelled) ;;
-+	    *)
-+		echo "Resource operation '${CRM_notify_task}${CRM_notify_interval}' for \
'${CRM_notify_rsc}' on '${CRM_notify_node}': \
                ${CRM_notify_desc}${CRM_notify_target_rc}" >> ${CRM_notify_recipient}
-+		;;
-+	esac
-+	;;
-+    *)
-+        echo "Unhandled $CRM_notify_kind notification" >> ${CRM_notify_recipient}
-+	env | grep CRM_notify >> ${CRM_notify_recipient}
-+        ;;
-+
-+esac
-diff --git a/include/crm_internal.h b/include/crm_internal.h
-index c13bc7b..fb03537 100644
---- a/include/crm_internal.h
-+++ b/include/crm_internal.h
-@@ -127,6 +127,7 @@ gboolean check_timer(const char *value);
- gboolean check_boolean(const char *value);
- gboolean check_number(const char *value);
- gboolean check_quorum(const char *value);
-+gboolean check_script(const char *value);
- gboolean check_utilization(const char *value);
-
- /* Shared PE/crmd functionality */
-diff --git a/lib/common/utils.c b/lib/common/utils.c
-index 6a234dc..628cf2f 100644
---- a/lib/common/utils.c
-+++ b/lib/common/utils.c
-@@ -180,6 +180,33 @@ check_quorum(const char *value)
- }
-
- gboolean
-+check_script(const char *value)
-+{
-+    struct stat st;
-+
-+    if(safe_str_eq(value, "/dev/null")) {
-+        return TRUE;
-+    }
-+
-+    if(stat(value, &st) != 0) {
-+        crm_err("Script %s does not exist", value);
-+        return FALSE;
-+    }
-+
-+    if(S_ISREG(st.st_mode) == 0) {
-+        crm_err("Script %s is not a regular file", value);
-+        return FALSE;
-+    }
-+
-+    if( (st.st_mode & (S_IXUSR | S_IXGRP )) == 0) {
-+        crm_err("Script %s is not executable", value);
-+        return FALSE;
-+    }
-+
-+    return TRUE;
-+}
-+
-+gboolean
- check_utilization(const char *value)
- {
-     char *end = NULL;
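
For context, the script configured via notification-script receives everything through the CRM_notify_* environment variables listed in notify_keys[]; the shipped pcmk_notify_sample.sh consumes them in shell. A hypothetical consumer in C (not part of pacemaker) would look much the same:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical notification consumer. crmd exports the
     * CRM_notify_* keys into the environment before invoking it.
     */
    int main(void)
    {
        const char *kind = getenv("CRM_notify_kind");   /* node|fencing|resource */
        const char *node = getenv("CRM_notify_node");
        const char *desc = getenv("CRM_notify_desc");

        if (kind == NULL) {
            fprintf(stderr, "not invoked via crmd notifications\n");
            return 1;
        }
        printf("%s event on '%s': %s\n",
               kind, node ? node : "?", desc ? desc : "");
        return 0;
    }
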
diff --git a/0013-Fix-cman-Suppress-implied-node-names.patch b/0013-Fix-cman-Suppress-implied-node-names.patch
deleted file mode 100644
index eb14b0d..0000000
--- a/0013-Fix-cman-Suppress-implied-node-names.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 2 Sep 2015 12:08:52 +1000
-Subject: [PATCH] Fix: cman: Suppress implied node names
-
-(cherry picked from commit e94fbcd0c49db9d3c69b7c0e478ba89a4d360dde)
----
- tools/crm_node.c | 20 +++++++++++++++++++-
- 1 file changed, 19 insertions(+), 1 deletion(-)
-
-diff --git a/tools/crm_node.c b/tools/crm_node.c
-index d0195e3..24cc4d7 100644
---- a/tools/crm_node.c
-+++ b/tools/crm_node.c
-@@ -434,6 +434,21 @@ try_heartbeat(int command, enum cluster_type_e stack)
- #if SUPPORT_CMAN
- #  include <libcman.h>
- #  define MAX_NODES 256
-+static bool valid_cman_name(const char *name, uint32_t nodeid)
-+{
-+    bool rc = TRUE;
-+
-+    /* Yes, %d, because that's what CMAN does */
-+    char *fakename = crm_strdup_printf("Node%d", nodeid);
-+
-+    if(crm_str_eq(fakename, name, TRUE)) {
-+        rc = FALSE;
-+        crm_notice("Ignoring inferred name from cman: %s", fakename);
-+    }
-+    free(fakename);
-+    return rc;
-+}
-+
- static gboolean
- try_cman(int command, enum cluster_type_e stack)
- {
-@@ -478,7 +493,10 @@ try_cman(int command, enum cluster_type_e stack)
-             }
-
-             for (lpc = 0; lpc < node_count; lpc++) {
--                if (command == 'l') {
-+                if(valid_cman_name(cman_nodes[lpc].cn_name, \
                cman_nodes[lpc].cn_nodeid) == FALSE) {
-+                    /* Do not print */
-+
-+                } if (command == 'l') {
-                     printf("%s ", cman_nodes[lpc].cn_name);
-
-                 } else if (cman_nodes[lpc].cn_nodeid != 0 && \
                cman_nodes[lpc].cn_member) {
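
The heuristic above relies on CMAN fabricating "Node<id>" names for nodes whose real name it cannot resolve; a name that exactly matches that pattern for its node id is therefore treated as implied. A standalone sketch of the check, outside the CMAN API:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Mirrors the intent of valid_cman_name() above. */
    static bool name_is_implied(const char *name, int nodeid)
    {
        char fakename[32];

        /* %d, because that is what CMAN uses to invent the name */
        snprintf(fakename, sizeof(fakename), "Node%d", nodeid);
        return strcmp(fakename, name) == 0;
    }

    int main(void)
    {
        printf("%d\n", name_is_implied("Node2", 2));        /* 1: implied   */
        printf("%d\n", name_is_implied("bl460g8n3", 2));    /* 0: real name */
        return 0;
    }
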
diff --git a/0014-Fix-crmd-Choose-more-appropriate-names-for-notificat.patch b/0014-Fix-crmd-Choose-more-appropriate-names-for-notificat.patch
deleted file mode 100644
index 2a12849..0000000
--- a/0014-Fix-crmd-Choose-more-appropriate-names-for-notificat.patch
+++ /dev/null
@@ -1,58 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 2 Sep 2015 14:32:40 +1000
-Subject: [PATCH] Fix: crmd: Choose more appropriate names for notification
- options
-
-(cherry picked from commit 8971ef024ffebf3d0240b30e620697a7b58232c4)
----
- crmd/control.c | 12 ++++++------
- cts/CIB.py     |  4 ++--
- 2 files changed, 8 insertions(+), 8 deletions(-)
-
-diff --git a/crmd/control.c b/crmd/control.c
-index d92f46b..d1f9acd 100644
---- a/crmd/control.c
-+++ b/crmd/control.c
-@@ -893,12 +893,12 @@ pe_cluster_option crmd_opts[] = {
- 	  "  To ensure these changes take effect, we can optionally poll the cluster's \
                status for changes."
-         },
-
--	{ "notification-script", NULL, "string", NULL, "/dev/null", &check_script,
--          "Notification script to be called after significant cluster events",
--          "Full path to a script that will be invoked when resources \
                start/stop/fail, fencing occurs or nodes join/leave the cluster.\n"
-+	{ "notification-agent", NULL, "string", NULL, "/dev/null", &check_script,
-+          "Notification script or tool to be called after significant cluster \
                events",
-+          "Full path to a script or binary that will be invoked when resources \
                start/stop/fail, fencing occurs or nodes join/leave the cluster.\n"
-           "Must exist on all nodes in the cluster."
-         },
--	{ "notification-target", NULL, "string", NULL, "", NULL,
-+	{ "notification-recipient", NULL, "string", NULL, "", NULL,
-           "Destination for notifications (Optional)",
-           "Where should the supplied script send notifications to.  Useful to avoid \
                hard-coding this in the script."
-         },
-@@ -992,8 +992,8 @@ config_query_callback(xmlNode * msg, int call_id, int rc, \
                xmlNode * output, void
-
-     verify_crmd_options(config_hash);
-
--    script = crmd_pref(config_hash, "notification-script");
--    value  = crmd_pref(config_hash, "notification-target");
-+    script = crmd_pref(config_hash, "notification-agent");
-+    value  = crmd_pref(config_hash, "notification-recipient");
-     crmd_enable_notifications(script, value);
-
-     value = crmd_pref(config_hash, XML_CONFIG_ATTR_DC_DEADTIME);
-diff --git a/cts/CIB.py b/cts/CIB.py
-index cd3a6a1..0933ccd 100644
---- a/cts/CIB.py
-+++ b/cts/CIB.py
-@@ -219,8 +219,8 @@ class CIB11(ConfigBase):
-         o["dc-deadtime"] = "5s"
-         o["no-quorum-policy"] = no_quorum
-         o["expected-quorum-votes"] = self.num_nodes
--        o["notification-script"] = "/var/lib/pacemaker/notify.sh"
--        o["notification-target"] = "/var/lib/pacemaker/notify.log"
-+        o["notification-agent"] = "/var/lib/pacemaker/notify.sh"
-+        o["notification-recipient"] = "/var/lib/pacemaker/notify.log"
-
-         if self.CM.Env["DoBSC"] == 1:
-             o["ident-string"] = "Linux-HA TEST configuration file - REMOVEME!!"
diff --git a/0015-Fix-crmd-Correctly-enable-disable-notifications.patch b/0015-Fix-crmd-Correctly-enable-disable-notifications.patch
deleted file mode 100644
index 575f6ea..0000000
--- a/0015-Fix-crmd-Correctly-enable-disable-notifications.patch
+++ /dev/null
@@ -1,22 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 2 Sep 2015 14:48:17 +1000
-Subject: [PATCH] Fix: crmd: Correctly enable/disable notifications
-
-(cherry picked from commit 7368cf120cd5ee848d2bdcd788497a3b89616b05)
----
- crmd/notify.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/crmd/notify.c b/crmd/notify.c
-index 980bfa6..ccf5ea8 100644
---- a/crmd/notify.c
-+++ b/crmd/notify.c
-@@ -50,7 +50,7 @@ crmd_enable_notifications(const char *script, const char *target)
-     free(notify_target);
-     notify_target = NULL;
-
--    if(safe_str_eq(script, "/dev/null")) {
-+    if(script == NULL || safe_str_eq(script, "/dev/null")) {
-         crm_notice("Notifications disabled");
-         return;
-     }
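
The extra NULL test matters because an unset notification-agent must disable notifications exactly like the literal /dev/null default. A NULL-safe sketch of the resulting predicate (illustrative only):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static bool notifications_disabled(const char *script)
    {
        return script == NULL || strcmp(script, "/dev/null") == 0;
    }

    int main(void)
    {
        return (notifications_disabled(NULL)
                && notifications_disabled("/dev/null")) ? 0 : 1;
    }
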
diff --git a/0016-Fix-crmd-Report-the-completion-status-and-output-of-.patch b/0016-Fix-crmd-Report-the-completion-status-and-output-of-.patch
deleted file mode 100644
index e7bc0e3..0000000
--- a/0016-Fix-crmd-Report-the-completion-status-and-output-of-.patch
+++ /dev/null
@@ -1,109 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 2 Sep 2015 14:34:04 +1000
-Subject: [PATCH] Fix: crmd: Report the completion status and output of
- notifications
-
-(cherry picked from commit 0c303d8a6f9f9a9dbec9f6d2e9e799fe335f8eaa)
----
- crmd/notify.c           | 37 ++++++++++++++++++++++++-------------
- lib/services/services.c |  4 ++--
- 2 files changed, 26 insertions(+), 15 deletions(-)
-
-diff --git a/crmd/notify.c b/crmd/notify.c
-index ccf5ea8..ca2be0f 100644
---- a/crmd/notify.c
-+++ b/crmd/notify.c
-@@ -29,6 +29,7 @@ static const char *notify_keys[] =
- {
-     "CRM_notify_recipient",
-     "CRM_notify_node",
-+    "CRM_notify_nodeid",
-     "CRM_notify_rsc",
-     "CRM_notify_task",
-     "CRM_notify_interval",
-@@ -83,12 +84,21 @@ set_notify_key(const char *name, const char *cvalue, char \
                *value)
-     free(value);
- }
-
-+static void crmd_notify_complete(svc_action_t *op)
-+{
-+    if(op->rc == 0) {
-+        crm_info("Notification %d (%s) complete", op->sequence, op->agent);
-+    } else {
-+        crm_warn("Notification %d (%s) failed: %d", op->sequence, op->agent, \
                op->rc);
-+    }
-+}
-
- static void
- send_notification(const char *kind)
- {
-     int lpc;
--    pid_t pid;
-+    svc_action_t *notify = NULL;
-+    static int operations = 0;
-
-     crm_debug("Sending '%s' notification to '%s' via '%s'", kind, notify_target, \
                notify_script);
-
-@@ -96,20 +106,20 @@ send_notification(const char *kind)
-     set_notify_key("CRM_notify_kind", kind, NULL);
-     set_notify_key("CRM_notify_version", VERSION, NULL);
-
--    pid = fork();
--    if (pid == -1) {
--        crm_perror(LOG_ERR, "notification failed");
--    }
-+    notify = services_action_create_generic(notify_script, NULL);
-
--    if (pid == 0) {
--        /* crm_debug("notification: I am the child. Executing the notification \
                program."); */
--        execl(notify_script, notify_script, NULL);
--        exit(EXIT_FAILURE);
-+    notify->timeout = 300;
-+    notify->standard = strdup("event");
-+    notify->id = strdup(notify_script);
-+    notify->agent = strdup(notify_script);
-+    notify->sequence = ++operations;
-
--    } else {
--        for(lpc = 0; lpc < DIMOF(notify_keys); lpc++) {
--            unsetenv(notify_keys[lpc]);
--        }
-+    if(services_action_async(notify, &crmd_notify_complete) == FALSE) {
-+        services_action_free(notify);
-+    }
-+
-+    for(lpc = 0; lpc < DIMOF(notify_keys); lpc++) {
-+        unsetenv(notify_keys[lpc]);
-     }
- }
-
-@@ -120,6 +130,7 @@ void crmd_notify_node_event(crm_node_t *node)
-     }
-
-     set_notify_key("CRM_notify_node", node->uname, NULL);
-+    set_notify_key("CRM_notify_nodeid", NULL, crm_itoa(node->id));
-     set_notify_key("CRM_notify_desc", node->state, NULL);
-
-     send_notification("node");
-diff --git a/lib/services/services.c b/lib/services/services.c
-index abf1458..4609a7d 100644
---- a/lib/services/services.c
-+++ b/lib/services/services.c
-@@ -598,7 +598,7 @@ action_async_helper(svc_action_t * op) {
-     }
-
-     /* keep track of ops that are in-flight to avoid collisions in the same namespace */
--    if (res) {
-+    if (op->rsc && res) {
-         inflight_ops = g_list_append(inflight_ops, op);
-     }
-
-@@ -622,7 +622,7 @@ services_action_async(svc_action_t * op, void (*action_callback) (svc_action_t *
-         g_hash_table_replace(recurring_actions, op->id, op);
-     }
-
--    if (is_op_blocked(op->rsc)) {
-+    if (op->rsc && is_op_blocked(op->rsc)) {
-         blocked_ops = g_list_append(blocked_ops, op);
-         return TRUE;
-     }
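The patch above replaces a bare fork()/execl() pair, which discarded the child's fate,
with an asynchronous service action whose callback receives the return code. A small
POSIX sketch of the status reporting that the removed code lacked; the real fix routes
this through services_action_async() and pacemaker's callback plumbing, which is not
reproduced here:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run a notification script and report how it finished -- the detail
     * the removed fork()/execl() version threw away. */
    static int run_and_report(const char *script)
    {
        pid_t pid = fork();

        if (pid == -1) {
            perror("fork");
            return -1;
        }
        if (pid == 0) {
            execl(script, script, (char *) NULL);
            _exit(EXIT_FAILURE); /* only reached if execl() failed */
        }

        int status = 0;
        if (waitpid(pid, &status, 0) == pid && WIFEXITED(status)) {
            printf("notification exited with rc=%d\n", WEXITSTATUS(status));
            return WEXITSTATUS(status);
        }
        printf("notification did not exit cleanly\n");
        return -1;
    }

    int main(void)
    {
        return run_and_report("/bin/true") == 0 ? 0 : 1;
    }
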
diff --git a/0017-Fix-cman-Print-the-nodeid-of-nodes-with-fake-names.patch b/0017-Fix-cman-Print-the-nodeid-of-nodes-with-fake-names.patch
deleted file mode 100644
index b627349..0000000
--- a/0017-Fix-cman-Print-the-nodeid-of-nodes-with-fake-names.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 3 Sep 2015 10:58:59 +1000
-Subject: [PATCH] Fix: cman: Print the nodeid of nodes with fake names
-
-(cherry picked from commit dd9a379408aa43b89c81d31ce7efa60b2e77f593)
----
- tools/crm_node.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/tools/crm_node.c b/tools/crm_node.c
-index 24cc4d7..ed02ee7 100644
---- a/tools/crm_node.c
-+++ b/tools/crm_node.c
-@@ -494,7 +494,8 @@ try_cman(int command, enum cluster_type_e stack)
-
-             for (lpc = 0; lpc < node_count; lpc++) {
-                 if(valid_cman_name(cman_nodes[lpc].cn_name, cman_nodes[lpc].cn_nodeid) == FALSE) {
--                    /* Do not print */
-+                    /* The name was invented, but we need to print something, make it the id instead */
-+                    printf("%u ", cman_nodes[lpc].cn_nodeid);
-
-                 } if (command == 'l') {
-                     printf("%s ", cman_nodes[lpc].cn_name);
diff --git a/0018-Refactor-Tools-Isolate-the-paths-which-truely-requir.patch b/0018-Refactor-Tools-Isolate-the-paths-which-truely-requir.patch
deleted file mode 100644
index 2fbd35e..0000000
--- a/0018-Refactor-Tools-Isolate-the-paths-which-truely-requir.patch
+++ /dev/null
@@ -1,299 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 3 Sep 2015 11:36:21 +1000
-Subject: [PATCH] Refactor: Tools: Isolate the paths which truely require
- corosync-2.x
-
-(cherry picked from commit 32c05b99f6a3e953668dcda71ce24e03927d83cb)
----
- tools/crm_node.c | 243 +++++++++++++++++++++++++++++++------------------------
- 1 file changed, 139 insertions(+), 104 deletions(-)
-
-diff --git a/tools/crm_node.c b/tools/crm_node.c
-index ed02ee7..308d4f9 100644
---- a/tools/crm_node.c
-+++ b/tools/crm_node.c
-@@ -60,6 +60,9 @@ static struct crm_option long_options[] = {
- #if SUPPORT_COROSYNC
-     {"openais",    0, 0, 'A', "\tOnly try connecting to an OpenAIS-based cluster"},
- #endif
-+#ifdef SUPPORT_CS_QUORUM
-+    {"corosync",   0, 0, 'C', "\tOnly try connecting to an Corosync-based \
                cluster"},
-+#endif
- #ifdef SUPPORT_HEARTBEAT
-     {"heartbeat",  0, 0, 'H', "Only try connecting to a Heartbeat-based cluster"},
- #endif
-@@ -223,6 +226,138 @@ int tools_remove_node_cache(const char *node, const char *target)
-     return rc > 0 ? 0 : rc;
- }
-
-+static gint
-+compare_node_uname(gconstpointer a, gconstpointer b)
-+{
-+    const crm_node_t *a_node = a;
-+    const crm_node_t *b_node = b;
-+    return strcmp(a_node->uname?a_node->uname:"", b_node->uname?b_node->uname:"");
-+}
-+
-+static int
-+node_mcp_dispatch(const char *buffer, ssize_t length, gpointer userdata)
-+{
-+    xmlNode *msg = string2xml(buffer);
-+
-+    if (msg) {
-+        xmlNode *node = NULL;
-+        GListPtr nodes = NULL;
-+        GListPtr iter = NULL;
-+
-+        crm_log_xml_trace(msg, "message");
-+
-+        for (node = __xml_first_child(msg); node != NULL; node = __xml_next(node)) {
-+            crm_node_t *peer = calloc(1, sizeof(crm_node_t));
-+
-+            nodes = g_list_insert_sorted(nodes, peer, compare_node_uname);
-+            peer->uname = (char*)crm_element_value_copy(node, "uname");
-+            peer->state = (char*)crm_element_value_copy(node, "state");
-+            crm_element_value_int(node, "id", (int*)&peer->id);
-+        }
-+
-+        for(iter = nodes; iter; iter = iter->next) {
-+            crm_node_t *peer = iter->data;
-+            if (command == 'l') {
-+                fprintf(stdout, "%u %s %s\n", peer->id, peer->uname, peer->state);
-+
-+            } else if (command == 'p') {
-+                if(safe_str_eq(peer->state, CRM_NODE_MEMBER)) {
-+                    fprintf(stdout, "%s ", peer->uname);
-+                }
-+
-+            } else if (command == 'i') {
-+                if(safe_str_eq(peer->state, CRM_NODE_MEMBER)) {
-+                    fprintf(stdout, "%u ", peer->id);
-+                }
-+            }
-+        }
-+
-+        g_list_free_full(nodes, free);
-+        free_xml(msg);
-+
-+        if (command == 'p') {
-+            fprintf(stdout, "\n");
-+        }
-+
-+        crm_exit(pcmk_ok);
-+    }
-+
-+    return 0;
-+}
-+
-+static void
-+node_mcp_destroy(gpointer user_data)
-+{
-+    crm_exit(ENOTCONN);
-+}
-+
-+static gboolean
-+try_pacemaker(int command, enum cluster_type_e stack)
-+{
-+    struct ipc_client_callbacks node_callbacks = {
-+        .dispatch = node_mcp_dispatch,
-+        .destroy = node_mcp_destroy
-+    };
-+
-+    if (stack == pcmk_cluster_heartbeat) {
-+        /* Nothing to do for them */
-+        return FALSE;
-+    }
-+
-+    switch (command) {
-+        case 'e':
-+            /* Age only applies to heartbeat clusters */
-+            fprintf(stdout, "1\n");
-+            crm_exit(pcmk_ok);
-+
-+        case 'q':
-+            /* Implement one day?
-+             * Wouldn't be much for pacemakerd to track it and include in the poke reply
-+             */
-+            return FALSE;
-+
-+        case 'R':
-+            {
-+                int lpc = 0;
-+                const char *daemons[] = {
-+                    CRM_SYSTEM_CRMD,
-+                    "stonith-ng",
-+                    T_ATTRD,
-+                    CRM_SYSTEM_MCP,
-+                };
-+
-+                for(lpc = 0; lpc < DIMOF(daemons); lpc++) {
-+                    if (tools_remove_node_cache(target_uname, daemons[lpc])) {
-+                        crm_err("Failed to connect to %s to remove node '%s'", daemons[lpc], target_uname);
-+                        crm_exit(pcmk_err_generic);
-+                    }
-+                }
-+                crm_exit(pcmk_ok);
-+            }
-+            break;
-+
-+        case 'i':
-+        case 'l':
-+        case 'p':
-+            /* Go to pacemakerd */
-+            {
-+                GMainLoop *amainloop = g_main_new(FALSE);
-+                mainloop_io_t *ipc -+                    \
mainloop_add_ipc_client(CRM_SYSTEM_MCP, G_PRIORITY_DEFAULT, 0, NULL, \
                &node_callbacks);
-+                if (ipc != NULL) {
-+                    /* Sending anything will get us a list of nodes */
-+                    xmlNode *poke = create_xml_node(NULL, "poke");
-+
-+                    crm_ipc_send(mainloop_get_ipc_client(ipc), poke, 0, 0, NULL);
-+                    free_xml(poke);
-+                    g_main_run(amainloop);
-+                }
-+            }
-+            break;
-+    }
-+    return FALSE;
-+}
-+
- #if SUPPORT_HEARTBEAT
- #  include <ocf/oc_event.h>
- #  include <ocf/oc_membership.h>
-@@ -626,66 +761,6 @@ ais_membership_dispatch(cpg_handle_t handle,
- #  include <corosync/quorum.h>
- #  include <corosync/cpg.h>
-
--static gint
--compare_node_uname(gconstpointer a, gconstpointer b)
--{
--    const crm_node_t *a_node = a;
--    const crm_node_t *b_node = b;
--    return strcmp(a_node->uname?a_node->uname:"", b_node->uname?b_node->uname:"");
--}
--
--static int
--node_mcp_dispatch(const char *buffer, ssize_t length, gpointer userdata)
--{
--    xmlNode *msg = string2xml(buffer);
--
--    if (msg) {
--        xmlNode *node = NULL;
--        GListPtr nodes = NULL;
--        GListPtr iter = NULL;
--
--        crm_log_xml_trace(msg, "message");
--
--        for (node = __xml_first_child(msg); node != NULL; node = __xml_next(node)) {
--            crm_node_t *peer = calloc(1, sizeof(crm_node_t));
--
--            nodes = g_list_insert_sorted(nodes, peer, compare_node_uname);
--            peer->uname = (char*)crm_element_value_copy(node, "uname");
--            peer->state = (char*)crm_element_value_copy(node, "state");
--            crm_element_value_int(node, "id", (int*)&peer->id);
--        }
--
--        for(iter = nodes; iter; iter = iter->next) {
--            crm_node_t *peer = iter->data;
--            if (command == 'l') {
--                fprintf(stdout, "%u %s\n", peer->id, peer->uname);
--
--            } else if (command == 'p') {
--                if(safe_str_eq(peer->state, CRM_NODE_MEMBER)) {
--                    fprintf(stdout, "%s ", peer->uname);
--                }
--            }
--        }
--
--        g_list_free_full(nodes, free);
--        free_xml(msg);
--
--        if (command == 'p') {
--            fprintf(stdout, "\n");
--        }
--
--        crm_exit(pcmk_ok);
--    }
--
--    return 0;
--}
--
--static void
--node_mcp_destroy(gpointer user_data)
--{
--    crm_exit(ENOTCONN);
--}
--
- static gboolean
- try_corosync(int command, enum cluster_type_e stack)
- {
-@@ -696,36 +771,7 @@ try_corosync(int command, enum cluster_type_e stack)
-     cpg_handle_t c_handle = 0;
-     quorum_handle_t q_handle = 0;
-
--    mainloop_io_t *ipc = NULL;
--    GMainLoop *amainloop = NULL;
--    const char *daemons[] = {
--            CRM_SYSTEM_CRMD,
--            "stonith-ng",
--            T_ATTRD,
--            CRM_SYSTEM_MCP,
--        };
--
--    struct ipc_client_callbacks node_callbacks = {
--        .dispatch = node_mcp_dispatch,
--        .destroy = node_mcp_destroy
--    };
--
-     switch (command) {
--        case 'R':
--            for(rc = 0; rc < DIMOF(daemons); rc++) {
--                if (tools_remove_node_cache(target_uname, daemons[rc])) {
--                    crm_err("Failed to connect to %s to remove node '%s'", daemons[rc], target_uname);
--                    crm_exit(pcmk_err_generic);
--                }
--            }
--            crm_exit(pcmk_ok);
--            break;
--
--        case 'e':
--            /* Age makes no sense (yet) in an AIS cluster */
--            fprintf(stdout, "1\n");
--            crm_exit(pcmk_ok);
--
-         case 'q':
-             /* Go direct to the Quorum API */
-             rc = quorum_initialize(&q_handle, NULL, &quorum_type);
-@@ -766,21 +812,8 @@ try_corosync(int command, enum cluster_type_e stack)
-             cpg_finalize(c_handle);
-             crm_exit(pcmk_ok);
-
--        case 'l':
--        case 'p':
--            /* Go to pacemakerd */
--            amainloop = g_main_new(FALSE);
--            ipc =
--                mainloop_add_ipc_client(CRM_SYSTEM_MCP, G_PRIORITY_DEFAULT, 0, NULL,
--                                        &node_callbacks);
--            if (ipc != NULL) {
--                /* Sending anything will get us a list of nodes */
--                xmlNode *poke = create_xml_node(NULL, "poke");
--
--                crm_ipc_send(mainloop_get_ipc_client(ipc), poke, 0, 0, NULL);
--                free_xml(poke);
--                g_main_run(amainloop);
--            }
-+        default:
-+            try_pacemaker(command, stack);
-             break;
-     }
-     return FALSE;
-@@ -963,5 +996,7 @@ main(int argc, char **argv)
-     }
- #endif
-
-+    try_pacemaker(command, try_stack);
-+
-     return (1);
- }
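The refactor above keeps only the genuinely corosync-2.x paths inside try_corosync()
and routes every other command through a shared try_pacemaker() fallback via the
switch's default label. A compact sketch of that layered dispatch shape, with made-up
handler names standing in for the crm_node functions:

    #include <stdio.h>

    /* Common fallback: handles the stack-agnostic commands. */
    static int try_common(int command)
    {
        switch (command) {
            case 'l':
            case 'p':
                printf("handled '%c' via the common (pacemakerd) path\n", command);
                return 1;
            default:
                return 0; /* nobody could handle it */
        }
    }

    /* Stack-specific handler keeps only what needs its own API and
     * delegates the rest, as try_corosync() now does. */
    static int try_stack_specific(int command)
    {
        switch (command) {
            case 'q':
                printf("handled 'q' via the stack-specific quorum API\n");
                return 1;
            default:
                return try_common(command);
        }
    }

    int main(void)
    {
        try_stack_specific('q');
        try_stack_specific('l');
        return 0;
    }
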
diff --git a/0019-Fix-corosync-Display-node-state-and-quorum-data-if-a.patch b/0019-Fix-corosync-Display-node-state-and-quorum-data-if-a.patch
deleted file mode 100644
index b7822e3..0000000
--- a/0019-Fix-corosync-Display-node-state-and-quorum-data-if-a.patch
+++ /dev/null
@@ -1,94 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 3 Sep 2015 12:27:59 +1000
-Subject: [PATCH] Fix: corosync: Display node state and quorum data if
- available
-
-(cherry picked from commit 4d4c92e515bbaf74917a311e19d5995b30c29430)
----
- mcp/pacemaker.c  |  7 +++++++
- tools/crm_node.c | 17 ++++++++++-------
- 2 files changed, 17 insertions(+), 7 deletions(-)
-
-diff --git a/mcp/pacemaker.c b/mcp/pacemaker.c
-index f9fc015..9c3195e 100644
---- a/mcp/pacemaker.c
-+++ b/mcp/pacemaker.c
-@@ -35,6 +35,8 @@
-
- #include <dirent.h>
- #include <ctype.h>
-+
-+gboolean pcmk_quorate = FALSE;
- gboolean fatal_error = FALSE;
- GMainLoop *mainloop = NULL;
-
-@@ -560,6 +562,10 @@ update_process_clients(crm_client_t *client)
-     crm_node_t *node = NULL;
-     xmlNode *update = create_xml_node(NULL, "nodes");
-
-+    if (is_corosync_cluster()) {
-+        crm_xml_add_int(update, "quorate", pcmk_quorate);
-+    }
-+
-     g_hash_table_iter_init(&iter, crm_peer_cache);
-     while (g_hash_table_iter_next(&iter, NULL, (gpointer *) & node)) {
-         xmlNode *xml = create_xml_node(update, "node");
-@@ -896,6 +902,7 @@ static gboolean
- mcp_quorum_callback(unsigned long long seq, gboolean quorate)
- {
-     /* Nothing to do */
-+    pcmk_quorate = quorate;
-     return TRUE;
- }
-
-diff --git a/tools/crm_node.c b/tools/crm_node.c
-index 308d4f9..9626120 100644
---- a/tools/crm_node.c
-+++ b/tools/crm_node.c
-@@ -243,8 +243,16 @@ node_mcp_dispatch(const char *buffer, ssize_t length, gpointer userdata)
-         xmlNode *node = NULL;
-         GListPtr nodes = NULL;
-         GListPtr iter = NULL;
-+        const char *quorate = crm_element_value(msg, "quorate");
-
-         crm_log_xml_trace(msg, "message");
-+        if (command == 'q' && quorate != NULL) {
-+            fprintf(stdout, "%s\n", quorate);
-+            crm_exit(pcmk_ok);
-+
-+        } else if(command == 'q') {
-+            crm_exit(1);
-+        }
-
-         for (node = __xml_first_child(msg); node != NULL; node = __xml_next(node)) {
-             crm_node_t *peer = calloc(1, sizeof(crm_node_t));
-@@ -258,7 +266,7 @@ node_mcp_dispatch(const char *buffer, ssize_t length, gpointer userdata)
-         for(iter = nodes; iter; iter = iter->next) {
-             crm_node_t *peer = iter->data;
-             if (command == 'l') {
--                fprintf(stdout, "%u %s %s\n", peer->id, peer->uname, peer->state);
-+                fprintf(stdout, "%u %s %s\n", peer->id, peer->uname, peer->state?peer->state:"");
-
-             } else if (command == 'p') {
-                 if(safe_str_eq(peer->state, CRM_NODE_MEMBER)) {
-@@ -310,12 +318,6 @@ try_pacemaker(int command, enum cluster_type_e stack)
-             fprintf(stdout, "1\n");
-             crm_exit(pcmk_ok);
-
--        case 'q':
--            /* Implement one day?
--             * Wouldn't be much for pacemakerd to track it and include in the poke reply
--             */
--            return FALSE;
--
-         case 'R':
-             {
-                 int lpc = 0;
-@@ -338,6 +340,7 @@ try_pacemaker(int command, enum cluster_type_e stack)
-
-         case 'i':
-         case 'l':
-+        case 'q':
-         case 'p':
-             /* Go to pacemakerd */
-             {
diff --git a/0020-Fix-pacemakerd-Do-not-forget-about-nodes-that-leave-.patch b/0020-Fix-pacemakerd-Do-not-forget-about-nodes-that-leave-.patch
deleted file mode 100644
index e2da8a5..0000000
--- a/0020-Fix-pacemakerd-Do-not-forget-about-nodes-that-leave-.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 3 Sep 2015 13:27:57 +1000
-Subject: [PATCH] Fix: pacemakerd: Do not forget about nodes that leave the
- cluster
-
-(cherry picked from commit 2ac396ae6f54c9437bcf786eeccf94d4e2fdd77a)
----
- mcp/pacemaker.c | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/mcp/pacemaker.c b/mcp/pacemaker.c
-index 9c3195e..88a6a1f 100644
---- a/mcp/pacemaker.c
-+++ b/mcp/pacemaker.c
-@@ -1108,6 +1108,8 @@ main(int argc, char **argv)
-     cluster.cpg.cpg_deliver_fn = mcp_cpg_deliver;
-     cluster.cpg.cpg_confchg_fn = mcp_cpg_membership;
-
-+    crm_set_autoreap(FALSE);
-+
-     if(cluster_connect_cpg(&cluster) == FALSE) {
-         crm_err("Couldn't connect to Corosync's CPG service");
-         rc = -ENOPROTOOPT;
diff --git a/0021-Fix-pacemakerd-Track-node-state-in-pacemakerd.patch b/0021-Fix-pacemakerd-Track-node-state-in-pacemakerd.patch
deleted file mode 100644
index b2814a8..0000000
--- a/0021-Fix-pacemakerd-Track-node-state-in-pacemakerd.patch
+++ /dev/null
@@ -1,58 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 3 Sep 2015 14:29:27 +1000
-Subject: [PATCH] Fix: pacemakerd: Track node state in pacemakerd
-
-(cherry picked from commit c186f54241c49bf20b1620767933b006063d613c)
----
- mcp/pacemaker.c | 22 +++++++++++++++++++++-
- 1 file changed, 21 insertions(+), 1 deletion(-)
-
-diff --git a/mcp/pacemaker.c b/mcp/pacemaker.c
-index 88a6a1f..9f00a21 100644
---- a/mcp/pacemaker.c
-+++ b/mcp/pacemaker.c
-@@ -901,7 +901,6 @@ mcp_cpg_membership(cpg_handle_t handle,
- static gboolean
- mcp_quorum_callback(unsigned long long seq, gboolean quorate)
- {
--    /* Nothing to do */
-     pcmk_quorate = quorate;
-     return TRUE;
- }
-@@ -909,8 +908,23 @@ mcp_quorum_callback(unsigned long long seq, gboolean quorate)
- static void
- mcp_quorum_destroy(gpointer user_data)
- {
-+    crm_info("connection lost");
-+}
-+
-+#if SUPPORT_CMAN
-+static gboolean
-+mcp_cman_dispatch(unsigned long long seq, gboolean quorate)
-+{
-+    pcmk_quorate = quorate;
-+    return TRUE;
-+}
-+
-+static void
-+mcp_cman_destroy(gpointer user_data)
-+{
-     crm_info("connection closed");
- }
-+#endif
-
- int
- main(int argc, char **argv)
-@@ -1122,6 +1136,12 @@ main(int argc, char **argv)
-         }
-     }
-
-+#if SUPPORT_CMAN
-+    if (rc == pcmk_ok && is_cman_cluster()) {
-+        init_cman_connection(mcp_cman_dispatch, mcp_cman_destroy);
-+    }
-+#endif
-+
-     if(rc == pcmk_ok) {
-         local_name = get_local_node_name();
-         update_node_processes(local_nodeid, local_name, get_process_list());
diff --git a/0022-Fix-PE-Resolve-memory-leak.patch b/0022-Fix-PE-Resolve-memory-leak.patch
deleted file mode 100644
index e7cd5b1..0000000
--- a/0022-Fix-PE-Resolve-memory-leak.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 8 Sep 2015 12:02:54 +1000
-Subject: [PATCH] Fix: PE: Resolve memory leak
-
-(cherry picked from commit 4f48a79fd19be0e614716f0900e31985d4714ace)
----
- lib/pengine/unpack.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/lib/pengine/unpack.c b/lib/pengine/unpack.c
-index 156a192..c4f3134 100644
---- a/lib/pengine/unpack.c
-+++ b/lib/pengine/unpack.c
-@@ -276,9 +276,13 @@ destroy_digest_cache(gpointer ptr)
-     op_digest_cache_t *data = ptr;
-
-     free_xml(data->params_all);
-+    free_xml(data->params_secure);
-     free_xml(data->params_restart);
-+
-     free(data->digest_all_calc);
-     free(data->digest_restart_calc);
-+    free(data->digest_secure_calc);
-+
-     free(data);
- }
-
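The leak fixed above is the classic incomplete-destructor pattern: the digest cache's
free function released some owned members but not params_secure or digest_secure_calc.
A self-contained sketch of the corrected shape, with plain char* fields standing in
for the xmlNode and digest members:

    #include <stdlib.h>
    #include <string.h>

    /* A destructor must release every member the struct owns, or each
     * cache eviction leaks the forgotten ones.  Field names here are
     * illustrative, not pacemaker's. */
    struct digest_cache {
        char *params_all;
        char *params_secure;   /* was leaked before the fix */
        char *params_restart;
        char *digest_secure;   /* was leaked before the fix */
    };

    static void digest_cache_free(struct digest_cache *d)
    {
        if (d == NULL) {
            return;
        }
        free(d->params_all);
        free(d->params_secure);
        free(d->params_restart);
        free(d->digest_secure);
        free(d);
    }

    int main(void)
    {
        struct digest_cache *d = calloc(1, sizeof(*d));
        d->params_secure = strdup("secure-params");
        digest_cache_free(d); /* valgrind-clean: all members released */
        return 0;
    }
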
diff --git a/0023-Fix-cman-Purge-all-node-caches-for-crm_node-R.patch b/0023-Fix-cman-Purge-all-node-caches-for-crm_node-R.patch
deleted file mode 100644
index 5ff7c08..0000000
--- a/0023-Fix-cman-Purge-all-node-caches-for-crm_node-R.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 8 Sep 2015 12:03:56 +1000
-Subject: [PATCH] Fix: cman: Purge all node caches for crm_node -R
-
-(cherry picked from commit c445e135b6d52b1a5f3cfdacfa54a63b313c00d2)
----
- tools/crm_node.c | 4 +---
- 1 file changed, 1 insertion(+), 3 deletions(-)
-
-diff --git a/tools/crm_node.c b/tools/crm_node.c
-index 9626120..48ee7c4 100644
---- a/tools/crm_node.c
-+++ b/tools/crm_node.c
-@@ -607,9 +607,7 @@ try_cman(int command, enum cluster_type_e stack)
-
-     switch (command) {
-         case 'R':
--            if (tools_remove_node_cache(target_uname, CRM_SYSTEM_CRMD)) {
--                crm_err("Failed to connect to "CRM_SYSTEM_CRMD" to remove node '%s'", target_uname);
--            }
-+            try_pacemaker(command, stack);
-             break;
-
-         case 'e':
diff --git a/0024-Refactor-membership-Safely-autoreap-nodes-without-co.patch b/0024-Refactor-membership-Safely-autoreap-nodes-without-co.patch
deleted file mode 100644
index 35617cc..0000000
--- a/0024-Refactor-membership-Safely-autoreap-nodes-without-co.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Tue, 8 Sep 2015 12:05:04 +1000
-Subject: [PATCH] Refactor: membership: Safely autoreap nodes without code
- duplication
-
-(cherry picked from commit acd660a1bdf40ada599041cb14d2128632d2e7a5)
----
- lib/cluster/membership.c | 43 +++++++++++++++++++++----------------------
- 1 file changed, 21 insertions(+), 22 deletions(-)
-
-diff --git a/lib/cluster/membership.c b/lib/cluster/membership.c
-index b7958eb..3081e54 100644
---- a/lib/cluster/membership.c
-+++ b/lib/cluster/membership.c
-@@ -795,8 +795,8 @@ crm_update_peer_expected(const char *source, crm_node_t * node, const char *expe
-  *       called within a cache iteration if reaping is possible,
-  *       otherwise reaping could invalidate the iterator.
-  */
--crm_node_t *
--crm_update_peer_state(const char *source, crm_node_t * node, const char *state, int membership)
-+static crm_node_t *
-+crm_update_peer_state_iter(const char *source, crm_node_t * node, const char *state, int membership, GHashTableIter *iter)
- {
-     gboolean is_member;
-
-@@ -822,13 +822,19 @@ crm_update_peer_state(const char *source, crm_node_t * node, const char *state,
-         free(last);
-
-         if (!is_member && crm_autoreap) {
--            if (status_type == crm_status_rstate) {
-+            if(iter) {
-+                crm_notice("Purged 1 peer with id=%u and/or uname=%s from the membership cache", node->id, node->uname);
-+                g_hash_table_iter_remove(iter);
-+
-+            } else if (status_type == crm_status_rstate) {
-                 crm_remote_peer_cache_remove(node->uname);
-+
-             } else {
-                 reap_crm_member(node->id, node->uname);
-             }
-             node = NULL;
-         }
-+
-     } else {
-         crm_trace("%s: Node %s[%u] - state is unchanged (%s)", source, node->uname, node->id,
-                   state);
-@@ -836,6 +842,12 @@ crm_update_peer_state(const char *source, crm_node_t * node, const char *state,
-     return node;
- }
-
-+crm_node_t *
-+crm_update_peer_state(const char *source, crm_node_t * node, const char *state, int membership)
-+{
-+    return crm_update_peer_state_iter(source, node, state, membership, NULL);
-+}
-+
- /*!
-  * \internal
-  * \brief Reap all nodes from cache whose membership information does not match
-@@ -853,26 +865,13 @@ crm_reap_unseen_nodes(uint64_t membership)
-     while (g_hash_table_iter_next(&iter, NULL, (gpointer *)&node)) {
-         if (node->last_seen != membership) {
-             if (node->state) {
--                /* crm_update_peer_state() cannot be called here, because that
--                 * might modify the peer cache, invalidating our iterator
-+                /*
-+                 * Calling crm_update_peer_state_iter() allows us to
-+                 * remove the node from crm_peer_cache without
-+                 * invalidating our iterator
-                  */
--                if (safe_str_eq(node->state, CRM_NODE_LOST)) {
--                    crm_trace("Node %s[%u] - state is unchanged (%s)",
--                              node->uname, node->id, CRM_NODE_LOST);
--                } else {
--                    char *last = node->state;
--
--                    node->state = strdup(CRM_NODE_LOST);
--                    crm_notice("Node %s[%u] - state is now %s (was %s)",
--                               node->uname, node->id, CRM_NODE_LOST, last);
--                    if (crm_status_callback) {
--                        crm_status_callback(crm_status_nstate, node, last);
--                    }
--                    if (crm_autoreap) {
--                        g_hash_table_iter_remove(&iter);
--                    }
--                    free(last);
--                }
-+                crm_update_peer_state_iter(__FUNCTION__, node, CRM_NODE_LOST, membership, &iter);
-+
-             } else {
-                 crm_info("State of node %s[%u] is still unknown",
-                          node->uname, node->id);
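The key move in the refactor above is threading the live GHashTableIter down into the
state-update helper, so a departed peer can be dropped with g_hash_table_iter_remove(),
the only removal that is safe while a traversal is in progress. A runnable GLib sketch
of that rule (build with pkg-config --cflags --libs glib-2.0):

    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        GHashTable *peers = g_hash_table_new_full(g_str_hash, g_str_equal,
                                                  g_free, g_free);
        g_hash_table_insert(peers, g_strdup("node1"), g_strdup("member"));
        g_hash_table_insert(peers, g_strdup("node2"), g_strdup("lost"));

        GHashTableIter iter;
        gpointer name, state;

        g_hash_table_iter_init(&iter, peers);
        while (g_hash_table_iter_next(&iter, &name, &state)) {
            if (g_strcmp0(state, "lost") == 0) {
                printf("reaping %s\n", (char *) name);
                /* g_hash_table_remove() here would invalidate the
                 * iterator; the iter-based removal is safe. */
                g_hash_table_iter_remove(&iter);
            }
        }

        g_hash_table_destroy(peers);
        return 0;
    }
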
diff --git a/0025-Fix-crmd-Prevent-segfault-by-correctly-detecting-whe.patch b/0025-Fix-crmd-Prevent-segfault-by-correctly-detecting-whe.patch
deleted file mode 100644
index a1797e9..0000000
--- a/0025-Fix-crmd-Prevent-segfault-by-correctly-detecting-whe.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 9 Sep 2015 14:46:49 +1000
-Subject: [PATCH] Fix: crmd: Prevent segfault by correctly detecting when
- notifications are not required
-
-(cherry picked from commit 5eb9f93ef666c75e5f32827a92b0a57ada063803)
----
- crmd/notify.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/crmd/notify.c b/crmd/notify.c
-index ca2be0f..179af18 100644
---- a/crmd/notify.c
-+++ b/crmd/notify.c
-@@ -141,7 +141,7 @@ crmd_notify_fencing_op(stonith_event_t * e)
- {
-     char *desc = NULL;
-
--    if(notify_script) {
-+    if(notify_script == NULL) {
-         return;
-     }
-
diff --git a/0026-Fix-crmd-don-t-add-node-ID-to-proxied-remote-node-re.patch b/0026-Fix-crmd-don-t-add-node-ID-to-proxied-remote-node-re.patch
deleted file mode 100644
index ba29678..0000000
--- a/0026-Fix-crmd-don-t-add-node-ID-to-proxied-remote-node-re.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From: Ken Gaillot <kgaillot@redhat.com>
-Date: Thu, 27 Aug 2015 11:00:02 -0500
-Subject: [PATCH] Fix: crmd: don't add node ID to proxied remote node requests
- for attrd
-
-446a1005 incorrectly set F_ATTRD_HOST_ID for proxied remote node requests to
-attrd. Since attrd only uses F_ATTRD_HOST_ID to associate a cluster node name
-with an ID, it doesn't ever need to be set for remote nodes.
-
-Additionally, that revision used the proxying cluster node's node ID, which can
-lead to node ID conflicts in attrd.
-
-(cherry picked from commit 6af6da534646dbadf3d8d1d63d0edb2844c72073)
----
- crmd/lrm_state.c | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/crmd/lrm_state.c b/crmd/lrm_state.c
-index c03fa0b..bea1027 100644
---- a/crmd/lrm_state.c
-+++ b/crmd/lrm_state.c
-@@ -540,7 +540,6 @@ remote_proxy_cb(lrmd_t *lrmd, void *userdata, xmlNode *msg)
-             if (safe_str_eq(type, T_ATTRD)
-                 && crm_element_value(request, F_ATTRD_HOST) == NULL) {
-                 crm_xml_add(request, F_ATTRD_HOST, proxy->node_name);
--                crm_xml_add_int(request, F_ATTRD_HOST_ID, get_local_nodeid(0));
-             }
-
-             rc = crm_ipc_send(proxy->ipc, request, flags, 5000, NULL);
diff --git a/0027-Fix-pacemaker_remote-memory-leak-in-ipc_proxy_dispat.patch b/0027-Fix-pacemaker_remote-memory-leak-in-ipc_proxy_dispat.patch
deleted file mode 100644
index 9dad48e..0000000
--- a/0027-Fix-pacemaker_remote-memory-leak-in-ipc_proxy_dispat.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From: Ken Gaillot <kgaillot@redhat.com>
-Date: Mon, 14 Sep 2015 15:00:13 -0500
-Subject: [PATCH] Fix: pacemaker_remote: memory leak in ipc_proxy_dispatch()
-
-Detected via routine valgrind testing
-
-(cherry picked from commit 3bb439d1554cb5567b886c52107bd3bb6f27b696)
----
- lrmd/ipc_proxy.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/lrmd/ipc_proxy.c b/lrmd/ipc_proxy.c
-index 9427393..2a5ad78 100644
---- a/lrmd/ipc_proxy.c
-+++ b/lrmd/ipc_proxy.c
-@@ -223,9 +223,9 @@ ipc_proxy_dispatch(qb_ipcs_connection_t * c, void *data, size_t size)
-     }
-
-     CRM_CHECK(client != NULL, crm_err("Invalid client");
--              return FALSE);
-+              free_xml(request); return FALSE);
-     CRM_CHECK(client->id != NULL, crm_err("Invalid client: %p", client);
--              return FALSE);
-+              free_xml(request); return FALSE);
-
-     /* this ensures that synced request/responses happen over the event channel
-      * in the crmd, allowing the crmd to process the messages async */
-@@ -241,6 +241,7 @@ ipc_proxy_dispatch(qb_ipcs_connection_t * c, void *data, size_t size)
-     crm_xml_add_int(msg, F_LRMD_IPC_MSG_FLAGS, flags);
-     add_message_xml(msg, F_LRMD_IPC_MSG, request);
-     lrmd_server_send_notify(ipc_proxy, msg);
-+    free_xml(request);
-     free_xml(msg);
-
-     return 0;
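The leak above came from CRM_CHECK() early returns that bailed out without releasing
the freshly parsed request, plus a success path that never freed it. A sketch of the
underlying rule, every exit path frees what the function allocated, using stand-ins
for string2xml() and the proxy dispatch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for string2xml(): returns an owned allocation. */
    static char *parse_request(const char *data)
    {
        return strdup(data);
    }

    static int dispatch(const char *data, int client_ok)
    {
        int rc = -1;
        char *request = parse_request(data);

        if (request == NULL) {
            return -1;
        }
        if (!client_ok) {
            fprintf(stderr, "Invalid client\n");
            goto done;          /* early exit still frees the request */
        }
        printf("proxied: %s\n", request);
        rc = 0;

    done:
        free(request);          /* single cleanup keeps every path honest */
        return rc;
    }

    int main(void)
    {
        dispatch("status-query", 0);
        return dispatch("status-query", 1);
    }
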
diff --git a/0028-Log-The-package-version-is-more-informative.patch b/0028-Log-The-package-version-is-more-informative.patch
deleted file mode 100644
index 543d9ab..0000000
--- a/0028-Log-The-package-version-is-more-informative.patch
+++ /dev/null
@@ -1,115 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 16 Sep 2015 09:14:39 +1000
-Subject: [PATCH] Log: The package version is more informative
-
-(cherry picked from commit 2b4d195e9e94777fc1953832fcce3637ffa2f449)
----
- crmd/cib.c         | 2 +-
- crmd/election.c    | 2 +-
- crmd/main.c        | 5 ++---
- lib/ais/plugin.c   | 2 +-
- lib/common/utils.c | 4 ++--
- mcp/pacemaker.c    | 4 ++--
- 6 files changed, 9 insertions(+), 10 deletions(-)
-
-diff --git a/crmd/cib.c b/crmd/cib.c
-index 7ec5eda..41e9efb 100644
---- a/crmd/cib.c
-+++ b/crmd/cib.c
-@@ -113,7 +113,7 @@ revision_check_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, vo
-     cmp = compare_version(revision, CRM_FEATURE_SET);
-
-     if (cmp > 0) {
--        crm_err("This build (%s) does not support the current resource configuration", VERSION);
-+        crm_err("This build (%s) does not support the current resource configuration", PACEMAKER_VERSION);
-         crm_err("We can only support up to CRM feature set %s (current=%s)",
-                 CRM_FEATURE_SET, revision);
-         crm_err("Shutting down the CRM");
-diff --git a/crmd/election.c b/crmd/election.c
-index b542a66..adab4e3 100644
---- a/crmd/election.c
-+++ b/crmd/election.c
-@@ -215,7 +215,7 @@ do_dc_takeover(long long action,
-     }
-
-     update_attr_delegate(fsa_cib_conn, cib_none, XML_CIB_TAG_CRMCONFIG, NULL, NULL, NULL, NULL,
--                         "dc-version", VERSION "-" BUILD_VERSION, FALSE, NULL, NULL);
-+                         "dc-version", PACEMAKER_VERSION "-" BUILD_VERSION, FALSE, NULL, NULL);
-
-     update_attr_delegate(fsa_cib_conn, cib_none, XML_CIB_TAG_CRMCONFIG, NULL, NULL, NULL, NULL,
-                          "cluster-infrastructure", cluster_type, FALSE, NULL, NULL);
-diff --git a/crmd/main.c b/crmd/main.c
-index e9a69b4..75ed91c 100644
---- a/crmd/main.c
-+++ b/crmd/main.c
-@@ -89,13 +89,12 @@ main(int argc, char **argv)
-         crmd_metadata();
-         return 0;
-     } else if (argc - optind == 1 && safe_str_eq("version", argv[optind])) {
--        fprintf(stdout, "CRM Version: ");
--        fprintf(stdout, "%s (%s)\n", VERSION, BUILD_VERSION);
-+        fprintf(stdout, "CRM Version: %s (%s)\n", PACEMAKER_VERSION, BUILD_VERSION);
-         return 0;
-     }
-
-     crm_log_init(NULL, LOG_INFO, TRUE, FALSE, argc, argv, FALSE);
--    crm_notice("CRM Git Version: %s\n", BUILD_VERSION);
-+    crm_notice("CRM Git Version: %s (%s)\n", PACEMAKER_VERSION, BUILD_VERSION);
-
-     if (optind > argc) {
-         ++argerr;
-diff --git a/lib/ais/plugin.c b/lib/ais/plugin.c
-index ab534fa..cf2a131 100644
---- a/lib/ais/plugin.c
-+++ b/lib/ais/plugin.c
-@@ -201,7 +201,7 @@ static struct corosync_exec_handler pcmk_exec_service[] = {
-  */
- /* *INDENT-OFF* */
- struct corosync_service_engine pcmk_service_handler = {
--    .name			= (char *)"Pacemaker Cluster Manager "PACKAGE_VERSION,
-+    .name			= (char *)"Pacemaker Cluster Manager "PACEMAKER_VERSION,
-     .id				= PCMK_SERVICE_ID,
-     .private_data_size		= 0,
-     .flow_control		= COROSYNC_LIB_FLOW_CONTROL_NOT_REQUIRED,
-diff --git a/lib/common/utils.c b/lib/common/utils.c
-index 628cf2f..2364f5c 100644
---- a/lib/common/utils.c
-+++ b/lib/common/utils.c
-@@ -1603,13 +1603,13 @@ crm_help(char cmd, int exit_code)
-     FILE *stream = (exit_code ? stderr : stdout);
-
-     if (cmd == 'v' || cmd == '$') {
--        fprintf(stream, "Pacemaker %s\n", VERSION);
-+        fprintf(stream, "Pacemaker %s\n", PACEMAKER_VERSION);
-         fprintf(stream, "Written by Andrew Beekhof\n");
-         goto out;
-     }
-
-     if (cmd == '!') {
--        fprintf(stream, "Pacemaker %s (Build: %s): %s\n", VERSION, BUILD_VERSION, CRM_FEATURES);
-+        fprintf(stream, "Pacemaker %s (Build: %s): %s\n", PACEMAKER_VERSION, BUILD_VERSION, CRM_FEATURES);
-         goto out;
-     }
-
-diff --git a/mcp/pacemaker.c b/mcp/pacemaker.c
-index 9f00a21..910d154 100644
---- a/mcp/pacemaker.c
-+++ b/mcp/pacemaker.c
-@@ -972,7 +972,7 @@ main(int argc, char **argv)
-                 shutdown = TRUE;
-                 break;
-             case 'F':
--                printf("Pacemaker %s (Build: %s)\n Supporting v%s: %s\n", VERSION, BUILD_VERSION,
-+                printf("Pacemaker %s (Build: %s)\n Supporting v%s: %s\n", PACEMAKER_VERSION, BUILD_VERSION,
-                        CRM_FEATURE_SET, CRM_FEATURES);
-                 crm_exit(pcmk_ok);
-             default:
-@@ -1039,7 +1039,7 @@ main(int argc, char **argv)
-         crm_exit(ENODATA);
-     }
-
--    crm_notice("Starting Pacemaker %s (Build: %s): %s", VERSION, BUILD_VERSION, CRM_FEATURES);
-+    crm_notice("Starting Pacemaker %s (Build: %s): %s", PACEMAKER_VERSION, BUILD_VERSION, CRM_FEATURES);
-     mainloop = g_main_new(FALSE);
-     sysrq_init();
-
diff --git a/0029-Fix-crm_resource-Allow-the-resource-configuration-to.patch b/0029-Fix-crm_resource-Allow-the-resource-configuration-to.patch
deleted file mode 100644
index 942b464..0000000
--- a/0029-Fix-crm_resource-Allow-the-resource-configuration-to.patch
+++ /dev/null
@@ -1,127 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 17 Sep 2015 09:46:38 +1000
-Subject: [PATCH] Fix: crm_resource: Allow the resource configuration to be
- modified for --force-{check,start,..} calls
-
-(cherry picked from commit 1206f735a8ddb33c77152c736828e823e7755c34)
----
- tools/crm_resource.c         | 36 +++++++++++++++++++++++++++++++-----
- tools/crm_resource.h         |  2 +-
- tools/crm_resource_runtime.c | 14 +++++++++++++-
- 3 files changed, 45 insertions(+), 7 deletions(-)
-
-diff --git a/tools/crm_resource.c b/tools/crm_resource.c
-index 156bbea..2a94362 100644
---- a/tools/crm_resource.c
-+++ b/tools/crm_resource.c
-@@ -247,6 +247,7 @@ main(int argc, char **argv)
-     const char *prop_set = NULL;
-     const char *rsc_long_cmd = NULL;
-     const char *longname = NULL;
-+    GHashTable *override_params = NULL;
-
-     char *xml_file = NULL;
-     crm_ipc_t *crmd_channel = NULL;
-@@ -503,11 +504,35 @@ main(int argc, char **argv)
-         }
-     }
-
--    if (optind < argc && argv[optind] != NULL) {
-+    if (optind < argc
-+        && argv[optind] != NULL
-+        && rsc_cmd == 0
-+        && rsc_long_cmd) {
-+
-+        override_params = g_hash_table_new_full(crm_str_hash, g_str_equal, g_hash_destroy_str, g_hash_destroy_str);
-+        while (optind < argc && argv[optind] != NULL) {
-+            char *name = calloc(1, strlen(argv[optind]));
-+            char *value = calloc(1, strlen(argv[optind]));
-+            int rc = sscanf(argv[optind], "%[^=]=%s", name, value);
-+
-+            if(rc == 2) {
-+                g_hash_table_replace(override_params, name, value);
-+
-+            } else {
-+                CMD_ERR("Error parsing '%s' as a name=value pair for --%s", argv[optind], rsc_long_cmd);
-+                free(value);
-+                free(name);
-+                argerr++;
-+            }
-+            optind++;
-+        }
-+
-+    } else if (optind < argc && argv[optind] != NULL && rsc_cmd == 0) {
-         CMD_ERR("non-option ARGV-elements: ");
-         while (optind < argc && argv[optind] != NULL) {
--            CMD_ERR("%s ", argv[optind++]);
--            ++argerr;
-+            CMD_ERR("[%d of %d] %s ", optind, argc, argv[optind]);
-+            optind++;
-+            argerr++;
-         }
-     }
-
-@@ -516,7 +541,8 @@ main(int argc, char **argv)
-     }
-
-     if (argerr) {
--        crm_help('?', EX_USAGE);
-+        CMD_ERR("Invalid option(s) supplied, use --help for valid usage");
-+        return crm_exit(EX_USAGE);
-     }
-
-     our_pid = calloc(1, 11);
-@@ -631,7 +657,7 @@ main(int argc, char **argv)
-         rc = wait_till_stable(timeout_ms, cib_conn);
-
-     } else if (rsc_cmd == 0 && rsc_long_cmd) { /* force-(stop|start|check) */
--        rc = cli_resource_execute(rsc_id, rsc_long_cmd, cib_conn, &data_set);
-+        rc = cli_resource_execute(rsc_id, rsc_long_cmd, override_params, cib_conn, &data_set);
-
-     } else if (rsc_cmd == 'A' || rsc_cmd == 'a') {
-         GListPtr lpc = NULL;
-diff --git a/tools/crm_resource.h b/tools/crm_resource.h
-index 5a206e0..d4c3b05 100644
---- a/tools/crm_resource.h
-+++ b/tools/crm_resource.h
-@@ -74,7 +74,7 @@ int cli_resource_search(const char *rsc, pe_working_set_t * data_set);
- int cli_resource_delete(cib_t *cib_conn, crm_ipc_t * crmd_channel, const char *host_uname, resource_t * rsc, pe_working_set_t * data_set);
- int cli_resource_restart(resource_t * rsc, const char *host, int timeout_ms, cib_t * cib);
- int cli_resource_move(const char *rsc_id, const char *host_name, cib_t * cib, pe_working_set_t *data_set);
--int cli_resource_execute(const char *rsc_id, const char *rsc_action, cib_t * cib, pe_working_set_t *data_set);
-+int cli_resource_execute(const char *rsc_id, const char *rsc_action, GHashTable *override_hash, cib_t * cib, pe_working_set_t *data_set);
-
- int cli_resource_update_attribute(const char *rsc_id, const char *attr_set, const char *attr_id,
-                                   const char *attr_name, const char *attr_value, bool recursive,
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index b9427bc..ce9db01 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -1297,7 +1297,7 @@ wait_till_stable(int timeout_ms, cib_t * cib)
- }
-
- int
--cli_resource_execute(const char *rsc_id, const char *rsc_action, cib_t * cib, pe_working_set_t *data_set)
-+cli_resource_execute(const char *rsc_id, const char *rsc_action, GHashTable *override_hash, cib_t * cib, pe_working_set_t *data_set)
- {
-     int rc = pcmk_ok;
-     svc_action_t *op = NULL;
-@@ -1360,6 +1360,18 @@ cli_resource_execute(const char *rsc_id, const char *rsc_action, cib_t * cib, pe
-         setenv("OCF_TRACE_RA", "1", 1);
-     }
-
-+    if(op && override_hash) {
-+        GHashTableIter iter;
-+        char *name = NULL;
-+        char *value = NULL;
-+
-+        g_hash_table_iter_init(&iter, override_hash);
-+        while (g_hash_table_iter_next(&iter, (gpointer *) & name, (gpointer *) & value)) {
-+            printf("Overriding the cluser configuration for '%s' with '%s' = '%s'\n", rsc->id, name, value);
-+            g_hash_table_replace(op->params, strdup(name), strdup(value));
-+        }
-+    }
-+
-     if(op == NULL) {
-         /* Re-run but with stderr enabled so we can display a sane error message */
-         crm_enable_stderr(TRUE);
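The override parsing above splits each trailing command-line argument on the first '='
using sscanf's "%[^=]=%s" scanset. A standalone sketch of the same parse; note that %s
stops at whitespace, and the buffers here are sized with an explicit +1 for the
terminating NUL:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Split "name=value" into two freshly allocated strings.
     * Returns 0 on success, -1 if the argument is not a pair. */
    static int parse_override(const char *arg, char **name, char **value)
    {
        size_t len = strlen(arg) + 1;

        *name = calloc(1, len);
        *value = calloc(1, len);
        if (*name == NULL || *value == NULL) {
            return -1;
        }
        /* %[^=] consumes everything before '=', %s the remainder */
        if (sscanf(arg, "%[^=]=%s", *name, *value) != 2) {
            free(*name);
            free(*value);
            *name = *value = NULL;
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        char *name, *value;

        if (parse_override("fake_delay=10s", &name, &value) == 0) {
            printf("override: %s -> %s\n", name, value);
            free(name);
            free(value);
            return 0;
        }
        fprintf(stderr, "not a name=value pair\n");
        return 1;
    }
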
diff --git a/0030-Log-lrmd-Improved-logging-when-no-pacemaker-remote-a.patch b/0030-Log-lrmd-Improved-logging-when-no-pacemaker-remote-a.patch
deleted file mode 100644
index 6bff962..0000000
--- a/0030-Log-lrmd-Improved-logging-when-no-pacemaker-remote-a.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 17 Sep 2015 14:43:15 +1000
-Subject: [PATCH] Log: lrmd: Improved logging when no pacemaker remote authkey
- is available
-
-(cherry picked from commit 20c2178f076ff32fdf9ba9a467c193b8dac2f9e5)
----
- lib/lrmd/lrmd_client.c | 8 ++++++--
- 1 file changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/lib/lrmd/lrmd_client.c b/lib/lrmd/lrmd_client.c
-index 42bdf2b..1f1ffde 100644
---- a/lib/lrmd/lrmd_client.c
-+++ b/lib/lrmd/lrmd_client.c
-@@ -1061,13 +1061,17 @@ lrmd_tls_set_key(gnutls_datum_t * key)
-     if (set_key(key, specific_location) == 0) {
-         crm_debug("Using custom authkey location %s", specific_location);
-         return 0;
-+
-+    } else {
-+        crm_err("No lrmd remote key found at %s, trying default locations", specific_location);
-     }
-
--    if (set_key(key, DEFAULT_REMOTE_KEY_LOCATION)) {
-+    if (set_key(key, DEFAULT_REMOTE_KEY_LOCATION) != 0) {
-         rc = set_key(key, ALT_REMOTE_KEY_LOCATION);
-     }
-+
-     if (rc) {
--        crm_err("No lrmd remote key found");
-+        crm_err("No lrmd remote key found at %s", DEFAULT_REMOTE_KEY_LOCATION);
-         return -1;
-     }
-
diff --git a/0031-Fix-liblrmd-don-t-print-error-if-remote-key-environm.patch b/0031-Fix-liblrmd-don-t-print-error-if-remote-key-environm.patch
deleted file mode 100644
index 0210482..0000000
--- a/0031-Fix-liblrmd-don-t-print-error-if-remote-key-environm.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From: Ken Gaillot <kgaillot@redhat.com>
-Date: Wed, 23 Sep 2015 10:45:39 -0500
-Subject: [PATCH] Fix: liblrmd: don't print error if remote key environment
- variable unset
-
-20c2178 added error logging if the remote key was unable to be read,
-however it would also log an error in the usual case where the
-environment variable was simply unset.
-
-(cherry picked from commit dec3349f1252e2c2c18ed110b8cc4a2b2212b613)
----
- lib/lrmd/lrmd_client.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/lib/lrmd/lrmd_client.c b/lib/lrmd/lrmd_client.c
-index 1f1ffde..f365e59 100644
---- a/lib/lrmd/lrmd_client.c
-+++ b/lib/lrmd/lrmd_client.c
-@@ -1062,8 +1062,8 @@ lrmd_tls_set_key(gnutls_datum_t * key)
-         crm_debug("Using custom authkey location %s", specific_location);
-         return 0;
-
--    } else {
--        crm_err("No lrmd remote key found at %s, trying default locations", specific_location);
-+    } else if (specific_location) {
-+        crm_err("No valid lrmd remote key found at %s, trying default location", specific_location);
-     }
-
-     if (set_key(key, DEFAULT_REMOTE_KEY_LOCATION) != 0) {
-@@ -1071,7 +1071,7 @@ lrmd_tls_set_key(gnutls_datum_t * key)
-     }
-
-     if (rc) {
--        crm_err("No lrmd remote key found at %s", DEFAULT_REMOTE_KEY_LOCATION);
-+        crm_err("No valid lrmd remote key found at %s", DEFAULT_REMOTE_KEY_LOCATION);
-         return -1;
-     }
-
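The fix above separates "a custom key location was configured but is unreadable",
which deserves an error, from "no custom location was ever set", which is the normal
case and should fall through silently. A sketch of that distinction; to the best of my
knowledge PCMK_authkey_location is the variable liblrmd consults, but treat the name
as an assumption here:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Assumed variable name; the fallback path is illustrative. */
        const char *specific = getenv("PCMK_authkey_location");

        if (specific != NULL) {
            /* A location was requested: failing to read it is an error. */
            fprintf(stderr, "No valid key at %s, trying default location\n",
                    specific);
        }
        /* Unset is the usual case -- no spurious error, just continue. */
        printf("trying default authkey location\n");
        return 0;
    }
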
diff --git a/0032-Fix-Tools-Repair-the-logging-of-interesting-command-.patch b/0032-Fix-Tools-Repair-the-logging-of-interesting-command-.patch
deleted file mode 100644
index fda67b2..0000000
--- a/0032-Fix-Tools-Repair-the-logging-of-interesting-command-.patch
+++ /dev/null
@@ -1,182 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Mon, 28 Sep 2015 14:54:28 +1000
-Subject: [PATCH] Fix: Tools: Repair the logging of 'interesting' command-lines
-
-(cherry picked from commit b7d6608d8b33b4e9580e04f25446176bac832fb7)
----
- tools/attrd_updater.c |  1 +
- tools/cibadmin.c      |  8 ++++++--
- tools/crm_attribute.c |  6 +++++-
- tools/crm_resource.c  | 30 +++++++++++++++++++++++-------
- 4 files changed, 35 insertions(+), 10 deletions(-)
-
-diff --git a/tools/attrd_updater.c b/tools/attrd_updater.c
-index 878dab5..11462ee 100644
---- a/tools/attrd_updater.c
-+++ b/tools/attrd_updater.c
-@@ -150,6 +150,7 @@ main(int argc, char **argv)
-             case 'v':
-                 command = flag;
-                 attr_value = optarg;
-+                crm_log_args(argc, argv); /* Too much? */
-                 break;
-             default:
-                 ++argerr;
-diff --git a/tools/cibadmin.c b/tools/cibadmin.c
-index 6b90536..c16d3c7 100644
---- a/tools/cibadmin.c
-+++ b/tools/cibadmin.c
-@@ -213,7 +213,7 @@ main(int argc, char **argv)
-     int option_index = 0;
-
-     crm_xml_init(); /* Sets buffer allocation strategy */
--    crm_log_preinit(NULL, argc, argv);
-+    crm_log_cli_init("cibadmin");
-     crm_set_options(NULL, "command [options] [data]", long_options,
-                     "Provides direct access to the cluster configuration."
-                     "\n\nAllows the configuration, or sections of it, to be queried, modified, replaced and deleted."
-@@ -286,6 +286,7 @@ main(int argc, char **argv)
-                 break;
-             case 'B':
-                 cib_action = CIB_OP_BUMP;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'V':
-                 command_options = command_options | cib_verbose;
-@@ -303,13 +304,16 @@ main(int argc, char **argv)
-             case 'X':
-                 crm_trace("Option %c => %s", flag, optarg);
-                 admin_input_xml = optarg;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'x':
-                 crm_trace("Option %c => %s", flag, optarg);
-                 admin_input_file = optarg;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'p':
-                 admin_input_stdin = TRUE;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'N':
-             case 'h':
-@@ -334,6 +338,7 @@ main(int argc, char **argv)
-             case 'f':
-                 force_flag = TRUE;
-                 command_options |= cib_quorum_override;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'a':
-                 output = createEmptyCib(1);
-@@ -355,7 +360,6 @@ main(int argc, char **argv)
-         quiet = FALSE;
-     }
-
--    crm_log_init(NULL, LOG_CRIT, FALSE, FALSE, argc, argv, quiet);
-     while (bump_log_num > 0) {
-         crm_bump_log_level(argc, argv);
-         bump_log_num--;
-diff --git a/tools/crm_attribute.c b/tools/crm_attribute.c
-index c37b096..fc2f7c7 100644
---- a/tools/crm_attribute.c
-+++ b/tools/crm_attribute.c
-@@ -146,11 +146,15 @@ main(int argc, char **argv)
-             case '?':
-                 crm_help(flag, EX_OK);
-                 break;
--            case 'D':
-             case 'G':
-+                command = flag;
-+                attr_value = optarg;
-+                break;
-+            case 'D':
-             case 'v':
-                 command = flag;
-                 attr_value = optarg;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'q':
-             case 'Q':
-diff --git a/tools/crm_resource.c b/tools/crm_resource.c
-index 2a94362..1b2976b 100644
---- a/tools/crm_resource.c
-+++ b/tools/crm_resource.c
-@@ -304,6 +304,7 @@ main(int argc, char **argv)
-                     || safe_str_eq("force-check",   longname)) {
-                     rsc_cmd = flag;
-                     rsc_long_cmd = longname;
-+                    crm_log_args(argc, argv);
-
-                 } else if (safe_str_eq("list-ocf-providers", longname)
-                            || safe_str_eq("list-ocf-alternatives", longname)
-@@ -433,6 +434,7 @@ main(int argc, char **argv)
-                 break;
-             case 'f':
-                 do_force = TRUE;
-+                crm_log_args(argc, argv);
-                 break;
-             case 'i':
-                 prop_id = optarg;
-@@ -452,41 +454,55 @@ main(int argc, char **argv)
-             case 'T':
-                 timeout_ms = crm_get_msec(optarg);
-                 break;
-+
-             case 'C':
-             case 'R':
-             case 'P':
--                rsc_cmd = 'C';
-+                crm_log_args(argc, argv);
-                 require_resource = FALSE;
-                 require_crmd = TRUE;
-+                rsc_cmd = 'C';
-                 break;
-+
-             case 'F':
--                rsc_cmd = flag;
-+                crm_log_args(argc, argv);
-                 require_crmd = TRUE;
-+                rsc_cmd = flag;
-+                break;
-+
-+            case 'U':
-+            case 'B':
-+            case 'M':
-+            case 'D':
-+                crm_log_args(argc, argv);
-+                rsc_cmd = flag;
-                 break;
-+
-             case 'L':
-             case 'c':
-             case 'l':
-             case 'q':
-             case 'w':
--            case 'D':
-             case 'W':
--            case 'M':
--            case 'U':
--            case 'B':
-             case 'O':
-             case 'o':
-             case 'A':
-             case 'a':
-                 rsc_cmd = flag;
-                 break;
-+
-             case 'j':
-                 print_pending = TRUE;
-                 break;
-             case 'p':
--            case 'g':
-             case 'd':
-             case 'S':
-+                crm_log_args(argc, argv);
-+                prop_name = optarg;
-+                rsc_cmd = flag;
-+                break;
-             case 'G':
-+            case 'g':
-                 prop_name = optarg;
-                 rsc_cmd = flag;
-                 break;
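The repair above re-adds crm_log_args() calls to exactly the option handlers that
change cluster state, relying on the function's internal one-shot guard so repeated
calls cannot produce duplicate log lines. A sketch of that guard-plus-join pattern:

    #include <stdio.h>
    #include <string.h>

    /* Roughly what crm_log_args() boils down to: join argv into one
     * string and emit it once, so each "interesting" option handler can
     * call it without duplicating output. */
    static void log_args(int argc, char **argv)
    {
        static int logged = 0;
        char buffer[1024] = "";

        if (argc == 0 || argv == NULL || logged) {
            return;
        }
        logged = 1;

        for (int i = 0; i < argc; i++) {
            if (argv[i] != NULL) {
                strncat(buffer, argv[i], sizeof(buffer) - strlen(buffer) - 1);
                strncat(buffer, " ", sizeof(buffer) - strlen(buffer) - 1);
            }
        }
        printf("Invoked: %s\n", buffer);
    }

    int main(int argc, char **argv)
    {
        log_args(argc, argv);
        log_args(argc, argv); /* second call is a no-op */
        return 0;
    }
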
diff --git a/0033-Feature-Tools-Do-not-send-command-lines-to-syslog.patch b/0033-Feature-Tools-Do-not-send-command-lines-to-syslog.patch
deleted file mode 100644
index c01d782..0000000
--- a/0033-Feature-Tools-Do-not-send-command-lines-to-syslog.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Mon, 28 Sep 2015 15:02:10 +1000
-Subject: [PATCH] Feature: Tools: Do not send command lines to syslog
-
-(cherry picked from commit 8dae6838312c6a60c2e4b7ffa73a100fd5d0dce3)
----
- lib/common/logging.c | 8 --------
- 1 file changed, 8 deletions(-)
-
-diff --git a/lib/common/logging.c b/lib/common/logging.c
-index b18b841..6879023 100644
---- a/lib/common/logging.c
-+++ b/lib/common/logging.c
-@@ -928,24 +928,17 @@ crm_log_args(int argc, char **argv)
- {
-     int lpc = 0;
-     int len = 0;
--    int restore = FALSE;
-     int existing_len = 0;
-     int line = __LINE__;
-     static int logged = 0;
-
-     char *arg_string = NULL;
--    struct qb_log_callsite *args_cs =
--        qb_log_callsite_get(__func__, __FILE__, ARGS_FMT, LOG_NOTICE, line, 0);
-
-     if (argc == 0 || argv == NULL || logged) {
-         return;
-     }
-
-     logged = 1;
--    qb_bit_set(args_cs->targets, QB_LOG_SYSLOG);        /* Turn on syslog too */
--
--    restore = qb_log_ctl(QB_LOG_SYSLOG, QB_LOG_CONF_STATE_GET, 0);
--    qb_log_ctl(QB_LOG_SYSLOG, QB_LOG_CONF_ENABLED, QB_TRUE);
-
-     for (; lpc < argc; lpc++) {
-         if (argv[lpc] == NULL) {
-@@ -958,7 +951,6 @@ crm_log_args(int argc, char **argv)
-     }
-
-     qb_log_from_external_source(__func__, __FILE__, ARGS_FMT, LOG_NOTICE, line, 0, arg_string);
--    qb_log_ctl(QB_LOG_SYSLOG, QB_LOG_CONF_ENABLED, restore);
-
-     free(arg_string);
- }
diff --git a/0034-Log-cibadmin-Default-once-again-to-LOG_CRIT.patch b/0034-Log-cibadmin-Default-once-again-to-LOG_CRIT.patch
deleted file mode 100644
index ccc3f1e..0000000
--- a/0034-Log-cibadmin-Default-once-again-to-LOG_CRIT.patch
+++ /dev/null
@@ -1,21 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Mon, 28 Sep 2015 18:45:32 +1000
-Subject: [PATCH] Log: cibadmin: Default once again to LOG_CRIT
-
-(cherry picked from commit d0d6118cbee3eccb3467058eadd91e08d3f4a42f)
----
- tools/cibadmin.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/tools/cibadmin.c b/tools/cibadmin.c
-index c16d3c7..84531f8 100644
---- a/tools/cibadmin.c
-+++ b/tools/cibadmin.c
-@@ -214,6 +214,7 @@ main(int argc, char **argv)
-
-     crm_xml_init(); /* Sets buffer allocation strategy */
-     crm_log_cli_init("cibadmin");
-+    set_crm_log_level(LOG_CRIT);
-     crm_set_options(NULL, "command [options] [data]", long_options,
-                     "Provides direct access to the cluster configuration."
-                     "\n\nAllows the configuration, or sections of it, to be queried, modified, replaced and deleted."
diff --git a/0035-Fix-crm_resource-Correctly-update-existing-meta-attr.patch b/0035-Fix-crm_resource-Correctly-update-existing-meta-attr.patch
deleted file mode 100644
index 33670ac..0000000
--- a/0035-Fix-crm_resource-Correctly-update-existing-meta-attr.patch
+++ /dev/null
@@ -1,87 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Wed, 30 Sep 2015 17:33:00 +1000
-Subject: [PATCH] Fix: crm_resource: Correctly update existing meta attributes
- regardless of their position in the heirarchy
-
-(cherry picked from commit f367348c832c64e2dc480dc96d2e0c2aa88639ba)
-
-Conflicts:
-	tools/crm_resource_runtime.c
----
- tools/crm_resource_runtime.c | 44 ++++++++++++++++++++++++++++++++++++--------
- 1 file changed, 36 insertions(+), 8 deletions(-)
-
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index ce9db01..a04adb9 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -213,10 +213,11 @@ cli_resource_update_attribute(const char *rsc_id, const char *attr_set, const ch
-     }
-
-     if (safe_str_eq(attr_set_type, XML_TAG_ATTR_SETS)) {
--        rc = find_resource_attr(cib, XML_ATTR_ID, uber_parent(rsc)->id, XML_TAG_META_SETS, attr_set, attr_id,
--                                attr_name, &local_attr_id);
--        if(rc == pcmk_ok && do_force == FALSE) {
--            if (BE_QUIET == FALSE) {
-+       if (do_force == FALSE) {
-+            rc = find_resource_attr(cib, XML_ATTR_ID, uber_parent(rsc)->id,
-+                                    XML_TAG_META_SETS, attr_set, attr_id,
-+                                    attr_name, &local_attr_id);
-+            if (rc == pcmk_ok && BE_QUIET == FALSE) {
-                 printf("WARNING: There is already a meta attribute for '%s' called '%s' (id=%s)\n",
-                        uber_parent(rsc)->id, attr_name, local_attr_id);
-                 printf("         Delete '%s' first or use --force to override\n", local_attr_id);
-@@ -224,7 +225,7 @@ cli_resource_update_attribute(const char *rsc_id, const char *attr_set, const ch
-             return -ENOTUNIQ;
-         }
-
--    } else if(rsc->parent) {
-+    } else if(rsc->parent && do_force == FALSE) {
-
-         switch(rsc->parent->variant) {
-             case pe_group:
-@@ -234,14 +235,41 @@ cli_resource_update_attribute(const char *rsc_id, const char *attr_set, const ch
-                 break;
-             case pe_master:
-             case pe_clone:
--                rsc = rsc->parent;
--                if (BE_QUIET == FALSE) {
--                    printf("Updating '%s' for '%s'...\n", rsc->id, rsc_id);
-+
-+                rc = find_resource_attr(cib, XML_ATTR_ID, rsc_id, attr_set_type, attr_set, attr_id, attr_name, &local_attr_id);
-+                free(local_attr_id);
-+
-+                if(rc != pcmk_ok) {
-+                    rsc = rsc->parent;
-+                    if (BE_QUIET == FALSE) {
-+                        printf("Updating '%s' on '%s', the parent of '%s'\n", attr_name, rsc->id, rsc_id);
-+                    }
-                 }
-                 break;
-             default:
-                 break;
-         }
-+
-+    } else if (rsc->parent && BE_QUIET == FALSE) {
-+        printf("Forcing update of '%s' for '%s' instead of '%s'\n", attr_name, \
                rsc_id, rsc->parent->id);
-+
-+    } else if(rsc->parent == NULL && rsc->children) {
-+        resource_t *child = rsc->children->data;
-+
-+        if(child->variant == pe_native) {
-+            lookup_id = clone_strip(child->id); /* Could be a cloned group! */
-+            rc = find_resource_attr(cib, XML_ATTR_ID, lookup_id, attr_set_type, \
                attr_set, attr_id, attr_name, &local_attr_id);
-+
-+            if(rc == pcmk_ok) {
-+                rsc = child;
-+                if (BE_QUIET == FALSE) {
-+                    printf("A value for '%s' already exists in child '%s', updating \
                that instead of '%s'\n", attr_name, lookup_id, rsc_id);
-+                }
-+            }
-+
-+            free(local_attr_id);
-+            free(lookup_id);
-+        }
-     }
-
-     lookup_id = clone_strip(rsc->id); /* Could be a cloned group! */
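
Net effect of the 0035 hunks above: before writing a meta attribute, probe where an existing value actually lives (the clone/master parent, or the sole child of the named resource) and retarget the update there, unless --force was given. A reduced sketch of that redirection, with toy one-child, one-attribute types in place of pacemaker's resource_t:

    #include <stdio.h>
    #include <string.h>

    struct rsc {
        const char *id;
        struct rsc *parent;
        struct rsc *child;         /* toy model: at most one child */
        const char *attr_name;     /* toy model: at most one meta attribute */
    };

    static int has_attr(const struct rsc *r, const char *name)
    {
        return r->attr_name != NULL && strcmp(r->attr_name, name) == 0;
    }

    static const struct rsc *attr_target(const struct rsc *r, const char *name,
                                         int force)
    {
        if (force) {
            return r;                    /* --force: touch what was named */
        }
        if (r->parent != NULL && !has_attr(r, name) && has_attr(r->parent, name)) {
            return r->parent;            /* value already lives on the clone */
        }
        if (r->parent == NULL && r->child != NULL && has_attr(r->child, name)) {
            return r->child;             /* value already lives in the child */
        }
        return r;
    }

    int main(void)
    {
        struct rsc clone = { "my-clone", NULL, NULL, "target-role" };
        struct rsc inst  = { "my-clone:0", &clone, NULL, NULL };

        clone.child = &inst;
        printf("update lands on %s\n", attr_target(&inst, "target-role", 0)->id);
        printf("with --force:   %s\n", attr_target(&inst, "target-role", 1)->id);
        return 0;
    }
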
diff --git a/0036-Log-crm_resource-restart-Improved-user-feedback-on-f.patch \
b/0036-Log-crm_resource-restart-Improved-user-feedback-on-f.patch deleted file mode \
100644 index 4dded82..0000000
--- a/0036-Log-crm_resource-restart-Improved-user-feedback-on-f.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Mon, 5 Oct 2015 12:27:59 +1100
-Subject: [PATCH] Log: crm_resource --restart: Improved user feedback on
- failure
-
-(cherry picked from commit b557a39973a1fb85b2791be67dc03cfd32c22d89)
----
- tools/crm_resource_runtime.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index a04adb9..878fd0b 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -1040,6 +1040,12 @@ cli_resource_restart(resource_t * rsc, const char *host, int \
                timeout_ms, cib_t *
-     pe_working_set_t data_set;
-
-     if(resource_is_running_on(rsc, host) == FALSE) {
-+        const char *id = rsc->clone_name?rsc->clone_name:rsc->id;
-+        if(host) {
-+            printf("%s is not running on %s and so cannot be restarted\n", id, \
                host);
-+        } else {
-+            printf("%s is not running anywhere and so cannot be restarted\n", id);
-+        }
-         return -ENXIO;
-     }
-
diff --git a/0037-Fix-crm_resource-Correctly-delete-existing-meta-attr.patch \
b/0037-Fix-crm_resource-Correctly-delete-existing-meta-attr.patch deleted file mode \
100644 index 5699706..0000000
--- a/0037-Fix-crm_resource-Correctly-delete-existing-meta-attr.patch
+++ /dev/null
@@ -1,179 +0,0 @@
-From: "Gao,Yan" <ygao@suse.com>
-Date: Wed, 30 Sep 2015 16:59:43 +0200
-Subject: [PATCH] Fix: crm_resource: Correctly delete existing meta attributes
- regardless of their position in the hierarchy
-
-Use the same logic as "--set-parameter" for "--delete-parameter".
-
-(cherry picked from commit cdee10c7310ab433b006126bc087f6b8dff3843e)
-
-Conflicts:
-	tools/crm_resource_runtime.c
----
- tools/crm_resource_runtime.c | 109 ++++++++++++++++++++++---------------------
- 1 file changed, 55 insertions(+), 54 deletions(-)
-
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index 878fd0b..2d51e88 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -190,47 +190,20 @@ find_resource_attr(cib_t * the_cib, const char *attr, const \
                char *rsc, const cha
-     return rc;
- }
-
--int
--cli_resource_update_attribute(const char *rsc_id, const char *attr_set, const char \
                *attr_id,
--                  const char *attr_name, const char *attr_value, bool recursive,
--                  cib_t * cib, pe_working_set_t * data_set)
-+static resource_t *
-+find_matching_attr_resource(resource_t * rsc, const char * rsc_id, const char * \
                attr_set, const char * attr_id,
-+                            const char * attr_name, cib_t * cib, const char * cmd)
- {
-     int rc = pcmk_ok;
--    static bool need_init = TRUE;
--
-     char *lookup_id = NULL;
-     char *local_attr_id = NULL;
--    char *local_attr_set = NULL;
--
--    xmlNode *xml_top = NULL;
--    xmlNode *xml_obj = NULL;
--
--    bool use_attributes_tag = FALSE;
--    resource_t *rsc = find_rsc_or_clone(rsc_id, data_set);
--
--    if (rsc == NULL) {
--        return -ENXIO;
--    }
--
--    if (safe_str_eq(attr_set_type, XML_TAG_ATTR_SETS)) {
--       if (do_force == FALSE) {
--            rc = find_resource_attr(cib, XML_ATTR_ID, uber_parent(rsc)->id,
--                                    XML_TAG_META_SETS, attr_set, attr_id,
--                                    attr_name, &local_attr_id);
--            if (rc == pcmk_ok && BE_QUIET == FALSE) {
--                printf("WARNING: There is already a meta attribute for '%s' called \
                '%s' (id=%s)\n",
--                       uber_parent(rsc)->id, attr_name, local_attr_id);
--                printf("         Delete '%s' first or use --force to override\n", \
                local_attr_id);
--            }
--            return -ENOTUNIQ;
--        }
-
--    } else if(rsc->parent && do_force == FALSE) {
-+    if(rsc->parent && do_force == FALSE) {
-
-         switch(rsc->parent->variant) {
-             case pe_group:
-                 if (BE_QUIET == FALSE) {
--                    printf("Updating '%s' for '%s' will not apply to its peers in \
                '%s'\n", attr_name, rsc_id, rsc->parent->id);
-+                    printf("Performing %s of '%s' for '%s' will not apply to its \
                peers in '%s'\n", cmd, attr_name, rsc_id, rsc->parent->id);
-                 }
-                 break;
-             case pe_master:
-@@ -242,7 +215,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-                 if(rc != pcmk_ok) {
-                     rsc = rsc->parent;
-                     if (BE_QUIET == FALSE) {
--                        printf("Updating '%s' on '%s', the parent of '%s'\n", \
                attr_name, rsc->id, rsc_id);
-+                        printf("Performing %s of '%s' on '%s', the parent of \
                '%s'\n", cmd, attr_name, rsc->id, rsc_id);
-                     }
-                 }
-                 break;
-@@ -251,7 +224,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         }
-
-     } else if (rsc->parent && BE_QUIET == FALSE) {
--        printf("Forcing update of '%s' for '%s' instead of '%s'\n", attr_name, \
                rsc_id, rsc->parent->id);
-+        printf("Forcing %s of '%s' for '%s' instead of '%s'\n", cmd, attr_name, \
                rsc_id, rsc->parent->id);
-
-     } else if(rsc->parent == NULL && rsc->children) {
-         resource_t *child = rsc->children->data;
-@@ -263,7 +236,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-             if(rc == pcmk_ok) {
-                 rsc = child;
-                 if (BE_QUIET == FALSE) {
--                    printf("A value for '%s' already exists in child '%s', updating \
                that instead of '%s'\n", attr_name, lookup_id, rsc_id);
-+                    printf("A value for '%s' already exists in child '%s', \
                performing %s on that instead of '%s'\n", attr_name, lookup_id, cmd, \
                rsc_id);
-                 }
-             }
-
-@@ -272,6 +245,51 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         }
-     }
-
-+    return rsc;
-+}
-+
-+int
-+cli_resource_update_attribute(const char *rsc_id, const char *attr_set, const char \
                *attr_id,
-+                  const char *attr_name, const char *attr_value, bool recursive,
-+                  cib_t * cib, pe_working_set_t * data_set)
-+{
-+    int rc = pcmk_ok;
-+    static bool need_init = TRUE;
-+
-+    char *lookup_id = NULL;
-+    char *local_attr_id = NULL;
-+    char *local_attr_set = NULL;
-+
-+    xmlNode *xml_top = NULL;
-+    xmlNode *xml_obj = NULL;
-+
-+    bool use_attributes_tag = FALSE;
-+    resource_t *rsc = find_rsc_or_clone(rsc_id, data_set);
-+
-+    if (rsc == NULL) {
-+        return -ENXIO;
-+    }
-+
-+    if (safe_str_eq(attr_set_type, XML_TAG_ATTR_SETS)) {
-+        if (do_force == FALSE) {
-+            rc = find_resource_attr(cib, XML_ATTR_ID, uber_parent(rsc)->id,
-+                                    XML_TAG_META_SETS, attr_set, attr_id,
-+                                    attr_name, &local_attr_id);
-+            if (rc == pcmk_ok && BE_QUIET == FALSE) {
-+                printf("WARNING: There is already a meta attribute for '%s' called \
                '%s' (id=%s)\n",
-+                       uber_parent(rsc)->id, attr_name, local_attr_id);
-+                printf("         Delete '%s' first or use --force to override\n", \
                local_attr_id);
-+            }
-+            free(local_attr_id);
-+            if (rc == pcmk_ok) {
-+                return -ENOTUNIQ;
-+            }
-+        }
-+
-+    } else {
-+        rsc = find_matching_attr_resource(rsc, rsc_id, attr_set, attr_id, \
                attr_name, cib, "update");
-+    }
-+
-     lookup_id = clone_strip(rsc->id); /* Could be a cloned group! */
-     rc = find_resource_attr(cib, XML_ATTR_ID, lookup_id, attr_set_type, attr_set, \
                attr_id, attr_name,
-                             &local_attr_id);
-@@ -401,25 +419,8 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         return -ENXIO;
-     }
-
--    if(rsc->parent && safe_str_eq(attr_set_type, XML_TAG_META_SETS)) {
--
--        switch(rsc->parent->variant) {
--            case pe_group:
--                if (BE_QUIET == FALSE) {
--                    printf("Removing '%s' for '%s' will not apply to its peers in \
                '%s'\n", attr_name, rsc_id, rsc->parent->id);
--                }
--                break;
--            case pe_master:
--            case pe_clone:
--                rsc = rsc->parent;
--                if (BE_QUIET == FALSE) {
--                    printf("Removing '%s' from '%s' for '%s'...\n", attr_name, \
                rsc->id, rsc_id);
--                }
--                break;
--            default:
--                break;
--        }
--
-+    if(safe_str_eq(attr_set_type, XML_TAG_META_SETS)) {
-+        rsc = find_matching_attr_resource(rsc, rsc_id, attr_set, attr_id, \
                attr_name, cib, "delete");
-     }
-
-     lookup_id = clone_strip(rsc->id);
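
Patch 0037 above is mostly deduplication: the parent/child redirection that 0035 bolted onto the update path moves into a single helper, find_matching_attr_resource(), which the update and delete paths both call, passing the verb purely for message text. The shape of that refactor, with illustrative names rather than the real signatures:

    #include <stdio.h>

    /* One traversal, shared by both operations; "cmd" is only message text. */
    static const char *find_matching_rsc(const char *rsc_id, const char *cmd)
    {
        printf("Performing %s of the attribute for '%s'\n", cmd, rsc_id);
        return rsc_id;      /* the real helper may return a parent or child */
    }

    static void update_attr(const char *rsc_id)
    {
        find_matching_rsc(rsc_id, "update");
        /* ... write the attribute on whatever came back ... */
    }

    static void delete_attr(const char *rsc_id)
    {
        find_matching_rsc(rsc_id, "delete");
        /* ... remove the attribute from whatever came back ... */
    }

    int main(void)
    {
        update_attr("dummy-rsc");
        delete_attr("dummy-rsc");
        return 0;
    }

Keeping the traversal in one place is also what lets the following patch (0038) fix --force handling for update and delete at the same time.
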
diff --git a/0038-Fix-crm_resource-Correctly-observe-force-when-deleti.patch \
b/0038-Fix-crm_resource-Correctly-observe-force-when-deleti.patch deleted file mode \
100644 index f5aaaea..0000000
--- a/0038-Fix-crm_resource-Correctly-observe-force-when-deleti.patch
+++ /dev/null
@@ -1,75 +0,0 @@
-From: Andrew Beekhof <andrew@beekhof.net>
-Date: Thu, 8 Oct 2015 13:38:07 +1100
-Subject: [PATCH] Fix: crm_resource: Correctly observe --force when deleting
- and updating attributes
-
-(cherry picked from commit bd232e36403ea807635cabd336d8bb3101710891)
----
- tools/crm_resource_runtime.c | 25 +++++++++++++++++++++----
- 1 file changed, 21 insertions(+), 4 deletions(-)
-
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index 2d51e88..c3f5275 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -123,8 +123,9 @@ find_resource_attr(cib_t * the_cib, const char *attr, const char \
                *rsc, const cha
-     xmlNode *xml_search = NULL;
-     char *xpath_string = NULL;
-
--    CRM_ASSERT(value != NULL);
--    *value = NULL;
-+    if(value) {
-+        *value = NULL;
-+    }
-
-     if(the_cib == NULL) {
-         return -ENOTCONN;
-@@ -176,7 +177,7 @@ find_resource_attr(cib_t * the_cib, const char *attr, const char \
                *rsc, const cha
-                    crm_element_value(child, XML_NVPAIR_ATTR_VALUE), ID(child));
-         }
-
--    } else {
-+    } else if(value) {
-         const char *tmp = crm_element_value(xml_search, attr);
-
-         if (tmp) {
-@@ -198,8 +199,10 @@ find_matching_attr_resource(resource_t * rsc, const char * \
                rsc_id, const char *
-     char *lookup_id = NULL;
-     char *local_attr_id = NULL;
-
--    if(rsc->parent && do_force == FALSE) {
-+    if(do_force == TRUE) {
-+        return rsc;
-
-+    } else if(rsc->parent) {
-         switch(rsc->parent->variant) {
-             case pe_group:
-                 if (BE_QUIET == FALSE) {
-@@ -270,6 +273,13 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         return -ENXIO;
-     }
-
-+    if(attr_id == NULL
-+       && do_force == FALSE
-+       && pcmk_ok != find_resource_attr(
-+           cib, XML_ATTR_ID, uber_parent(rsc)->id, NULL, NULL, NULL, attr_name, \
                NULL)) {
-+        printf("\n");
-+    }
-+
-     if (safe_str_eq(attr_set_type, XML_TAG_ATTR_SETS)) {
-         if (do_force == FALSE) {
-             rc = find_resource_attr(cib, XML_ATTR_ID, uber_parent(rsc)->id,
-@@ -419,6 +429,13 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         return -ENXIO;
-     }
-
-+    if(attr_id == NULL
-+       && do_force == FALSE
-+       && find_resource_attr(
-+           cib, XML_ATTR_ID, uber_parent(rsc)->id, NULL, NULL, NULL, attr_name, \
                NULL) != pcmk_ok) {
-+        printf("\n");
-+    }
-+
-     if(safe_str_eq(attr_set_type, XML_TAG_META_SETS)) {
-         rsc = find_matching_attr_resource(rsc, rsc_id, attr_set, attr_id, \
                attr_name, cib, "delete");
-     }
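
The first hunk of 0038 makes the value out-parameter of find_resource_attr() optional, so the new existence-only probes can pass NULL. The guard pattern, as a stand-alone toy lookup rather than the CIB API:

    #include <stdio.h>
    #include <string.h>

    /* Toy lookup: "found" only for one hard-coded attribute name. */
    static int find_attr(const char *name, const char **value)
    {
        if (value != NULL) {
            *value = NULL;          /* only touch it if the caller gave one */
        }
        if (strcmp(name, "is-managed") == 0) {
            if (value != NULL) {
                *value = "false";
            }
            return 0;               /* found */
        }
        return -1;                  /* not found */
    }

    int main(void)
    {
        const char *v = NULL;

        if (find_attr("is-managed", &v) == 0) {
            printf("value: %s\n", v);
        }
        /* Existence check only -- passing NULL is now safe: */
        printf("exists: %s\n", find_attr("is-managed", NULL) == 0 ? "yes" : "no");
        return 0;
    }
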
diff --git a/bz1297985-fix-configure-curses-test.patch \
b/bz1297985-fix-configure-curses-test.patch new file mode 100644
index 0000000..48b475b
--- /dev/null
+++ b/bz1297985-fix-configure-curses-test.patch
@@ -0,0 +1,23 @@
+--- a/configure.ac	2016-01-18 17:40:43.096090475 +0100
++++ b/configure.ac	2016-01-18 17:54:09.426193896 +0100
+@@ -941,12 +941,18 @@
+
+ dnl Check for printw() prototype compatibility
+ if test X"$CURSESLIBS" != X"" && cc_supports_flag -Wcast-qual && cc_supports_flag \
-Werror; then +-    AC_MSG_CHECKING(whether printw() requires argument of "const char \
*") +     ac_save_LIBS=$LIBS
+-    LIBS="$CURSESLIBS  $LIBS"
++    LIBS="$CURSESLIBS"
+     ac_save_CFLAGS=$CFLAGS
+     CFLAGS="-Wcast-qual -Werror"
++    # avoid broken test because of hardened build environment in Fedora 23+
++    # - https://fedoraproject.org/wiki/Changes/Harden_All_Packages
++    # - https://bugzilla.redhat.com/1297985
++    if cc_supports_flag -fPIC; then
++        CFLAGS="$CFLAGS -fPIC"
++    fi
+
++    AC_MSG_CHECKING(whether printw() requires argument of "const char *")
+     AC_LINK_IFELSE(
+ 	    [AC_LANG_PROGRAM(
+ 	      [
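
For reference, what the fixed probe asks the compiler to build is approximately the program below: with -Wcast-qual -Werror (plus -fPIC, so the hardened, PIE-by-default Fedora 23 toolchain does not fail the link for reasons unrelated to curses) it succeeds only when printw() is declared to take "const char *". Build sketch, assuming ncurses: cc -Wcast-qual -Werror -fPIC conftest.c -lncurses

    #include <curses.h>

    int main(void)
    {
        /* If printw() is declared as taking plain "char *", handing it a
         * pointer-to-const trips a qualifier warning that -Werror makes
         * fatal; on a const-correct curses this compiles and links fine.
         * AC_LINK_IFELSE only links the program -- it is never run. */
        printw((const char *)"%s\n", "conftest");
        return 0;
    }
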
diff --git a/pacemaker-63f8e9a-rollup.patch b/pacemaker-63f8e9a-rollup.patch
deleted file mode 100644
index ef14d87..0000000
--- a/pacemaker-63f8e9a-rollup.patch
+++ /dev/null
@@ -1,5904 +0,0 @@
-diff --git a/ChangeLog b/ChangeLog
-index d70edbd..e445890 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,4 +1,218 @@
-
-+* Wed Jun 24 2015 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.13-1
-+- Update source tarball to revision: 2a1847e
-+- Changesets: 750
-+- Diff:       156 files changed, 11323 insertions(+), 3725 deletions(-)
-+
-+- Features added since Pacemaker-1.1.12
-+  + Allow fail-counts to be removed en-mass when the new attrd is in operation
-+  + attrd supports private attributes (not written to CIB)
-+  + crmd: Ensure a watchdog device is in use if stonith-watchdog-timeout is \
                configured
-+  + crmd: If configured, trigger the watchdog immediately if we lose quorum and no-quorum-policy=suicide
-+  + crm_diff: Support generating a difference without versions details if \
                --no-version/-u is supplied
-+  + crm_resource: Implement an intelligent restart capability
-+  + Fencing: Advertise the watchdog device for fencing operations
-+  + Fencing: Allow the cluster to recover resources if the watchdog is in use
-+  + fencing: cl#5134 - Support random fencing delay to avoid double fencing
-+  + mcp: Allow orphan children to initiate node panic via SIGQUIT
-+  + mcp: Turn on sbd integration if pacemakerd finds it running
-+  + mcp: Two new error codes that result in machine reset or power off
-+  + Officially support the resource-discovery attribute for location constraints
-+  + PE: Allow natural ordering of colocation sets
-+  + PE: Support non-actionable degraded mode for OCF
-+  + pengine: cl#5207 - Display "UNCLEAN" for resources running on unclean offline \
                nodes
-+  + remote: pcmk remote client tool for use with container wrapper script
-+  + Support machine panics for some kinds of errors (via sbd if available)
-+  + tools: add crm_resource --wait option
-+  + tools: attrd_updater supports --query and --all options
-+  + tools: attrd_updater: Allow attributes to be set for other nodes
-+
-+- Changes since Pacemaker-1.1.12
-+  + pengine: exclusive discovery implies rsc is only allowed on exclusive subset of \
                nodes
-+  + acl: Correctly implement the 'reference' acl directive
-+  + acl: Do not delay evaluation of added nodes in some situations
-+  + attrd: b22b1fe did uuid test too early
-+  + attrd: Clean out the node cache when requested by the admin
-+  + attrd: fixes double free in attrd legacy
-+  + attrd: properly write attributes for peers once uuid is discovered
-+  + attrd: refresh should force an immediate write-out of all attributes
-+  + attrd: Simplify how node deletions happen
-+  + Bug rhbz#1067544 - Tools: Correctly handle --ban, --move and --locate for \
                master/slave groups
-+  + Bug rhbz#1181824 - Ensure the DC can be reliably fenced
-+  + cib: Ability to upgrade cib validation schema in legacy mode
-+  + cib: Always generate digests for cib diffs in legacy mode
-+  + cib: assignment where comparison intended
-+  + cib: Avoid nodeid conflicts we don't care about
-+  + cib: Correctly add "update-origin", "update-client" and "update-user" \
                attributes for cib
-+  + cib: Correctly set up signal handlers
-+  + cib: Correctly track node state
-+  + cib: Do not update on disk backups if we're just querying them
-+  + cib: Enable cib legacy mode for plugin-based clusters
-+  + cib: Ensure file-based backends treat '-o section' consistently with the native \
                backend
-+  + cib: Ensure upgrade operations from a non-DC get an acknowledgement
-+  + cib: No need to enforce cib digests for v2 diffs in legacy mode
-+  + cib: Revert d153b86 to instantly get cib synchronized in legacy mode
-+  + cib: tls sock cleanup for remote cib connections
-+  + cli: Ensure subsequent unknown long options are correctly detected
-+  + cluster: Invoke crm_remove_conflicting_peer() only when the new node's uname is \
                being assigned in the node cache
-+  + common: Increment current and age for lib common as a result of APIs being \
                added
-+  + corosync:  Bug cl#5232 - Somewhat gracefully handle nodes with invalid UUIDs
-+  + corosync: Avoid unnecessary repeated CMAP API calls
-+  + crmd/pengine: handle on-fail=ignore properly
-+  + crmd: Add "on_node" attribute for *_last_failure_0 lrm resource operations
-+  + crmd: All peers need to track node shutdown requests
-+  + crmd: Cached copies of transient attributes cease to be valid once a node \
                leaves the membership
-+  + crmd: Correctly add the local option that validates against schema for pengine \
                to calculate
-+  + crmd: Disable debug logging that results in significant overhead
-+  + crmd: do not remove connection resources during re-probe
-+  + crmd: don't update fail count twice for same failure
-+  + crmd: Ensure remote connection resources timeout properly during 'migrate_from' \
                action
-+  + crmd: Ensure throttle_mode() does something on Linux
-+  + crmd: Fixes crash when remote connection migration fails
-+  + crmd: gracefully handle remote node disconnects during op execution
-+  + crmd: Handle remote connection failures while executing ops on remote \
                connection
-+  + crmd: include remote nodes when forcing cluster wide resource reprobe
-+  + crmd: never stop recurring monitor ops for pcmk remote during incomplete \
                migration
-+  + crmd: Prevent the old version of DC from being fenced when it shuts down for \
                rolling-upgrade
-+  + crmd: Prevent use-of-NULL during reprobe
-+  + crmd: properly update job limit for baremetal remote-nodes
-+  + crmd: Remote-node throttle jobs count towards cluster-node hosting connection rsc
-+  + crmd: Reset stonith failcount to recover transitioner when the node rejoins
-+  + crmd: resolves memory leak in crmd.
-+  + crmd: respect start-failure-is-fatal even for artificially injected events
-+  + crmd: Wait for all pending operations to complete before poking the policy \
                engine
-+  + crmd: When container's host is fenced, cancel in-flight operations
-+  + crm_attribute: Correctly update config options when -o crm_config is specified
-+  + crm_failcount: Better error reporting when no resource is specified
-+  + crm_mon: add exit reason to resource failure output
-+  + crm_mon: Fill CRM_notify_node in traps with node's uname rather than node's id \
                if possible
-+  + crm_mon: Repair notification delivery when the v2 patch format is in use
-+  + crm_node: Correctly remove nodes from the CIB by nodeid
-+  + crm_report: More patterns for finding logs on non-DC nodes
-+  + crm_resource: Allow resource restart operations to be node specific
-+  + crm_resource: avoid deletion of lrm cache on node with resource discovery \
                disabled.
-+  + crm_resource: Calculate how long to wait for a restart based on the resource \
                timeouts
-+  + crm_resource: Clean up memory in --restart error paths
-+  + crm_resource: Display the locations of all anonymous clone children when \
                supplying the children's common ID
-+  + crm_resource: Ensure --restart sets/clears meta attributes
-+  + crm_resource: Ensure fail-counts are purged when we redetect the state of all \
                resources
-+  + crm_resource: Implement --timeout for resource restart operations
-+  + crm_resource: Include group members when calculating the next timeout
-+  + crm_resource: Memory leak in error paths
-+  + crm_resource: Prevent use-after-free
-+  + crm_resource: Repair regression test outputs
-+  + crm_resource: Use-after-free when restarting a resource
-+  + dbus: ref count leaks
-+  + dbus: Ensure both the read and write queues get dispatched
-+  + dbus: Fail gracefully if malloc fails
-+  + dbus: handle dispatch queue when multiple replies need to be processed
-+  + dbus: Notice when dbus connections get disabled
-+  + dbus: Remove double-free introduced while trying to make coverity shut up
-+  + ensure if B is colocated with A, B can never run without A
-+  + fence_legacy: Avoid passing 'port' to cluster-glue agents
-+  + fencing: Allow nodes to be purged from the member cache
-+  + fencing: Correctly make args for fencing agents
-+  + fencing: Correctly wait for self-fencing to occur when the watchdog is in use
-+  + fencing: Ensure the hostlist parameter is set for watchdog agents
-+  + fencing: Force 'stonith-ng' as the system name
-+  + fencing: Gracefully handle invalid metadata from agents
-+  + fencing: If configured, wait stonith-watchdog-timer seconds for self-fencing to \
                complete
-+  + fencing: Reject actions for devices that haven't been explicitly registered yet
-+  + ipc: properly allocate server enforced buffer size on client
-+  + ipc: use server enforced buffer during ipc client send
-+  + lrmd, services: interpret LSB status codes properly
-+  + lrmd: add back support for class heartbeat agents
-+  + lrmd: cancel pending async connection during disconnect
-+  + lrmd: enable ipc proxy for docker-wrapper privileged mode
-+  + lrmd: fix rescheduling of systemd monitor op during start
-+  + lrmd: Handle systemd reporting 'done' before a resource is actually stopped
-+  + lrmd: Hint to child processes that using sd_notify is not required
-+  + lrmd: Log with the correct personality
-+  + lrmd: Prevent glib assert triggered by timers being removed from mainloop more \
                than once
-+  + lrmd: report original timeout when systemd operation completes
-+  + lrmd: store failed operation exit reason in cib
-+  + mainloop: resolves race condition mainloop poll involving modification of ipc \
                connections
-+  + make targeted reprobe for remote node work, crm_resource -C -N <remote node>
-+  + mcp: Allow a configurable delay when debugging shutdown issues
-+  + mcp: Avoid requiring 'export' for SYS-V sysconfig options
-+  + Membership: Detect and resolve nodes that change their ID
-+  + pacemakerd: resolves memory leak of xml structure in pacemakerd
-+  + pengine: ability to launch resources in isolated containers
-+  + pengine: add #kind=remote for baremetal remote-nodes
-+  + pengine: allow baremetal remote-nodes to recover without requiring fencing when \
                cluster-node fails
-+  + pengine: allow remote-nodes to be placed in maintenance mode
-+  + pengine: Avoid trailing whitespaces when printing resource state
-+  + pengine: cl#5130 - Choose nodes capable of running all the colocated \
                utilization resources
-+  + pengine: cl#5130 - Only check the capacities of the nodes that are allowed to \
                run the resource
-+  + pengine: Correctly compare feature set to determine how to unpack meta \
                attributes
-+  + pengine: disable migrations for resources with isolation containers
-+  + pengine: disable reloading of resources within isolated container wrappers
-+  + pengine: Do not aggregate children in a pending state into the \
                started/stopped/etc lists
-+  + pengine: Do not record duplicate copies of the failed actions
-+  + pengine: Do not reschedule monitors that are no longer needed while resource \
                definitions have changed
-+  + pengine: Fence baremetal remote when recurring monitor op fails
-+  + pengine: Fix colocation with unmanaged resources
-+  + pengine: Fix the behaviors of multi-state resources with asymmetrical ordering
-+  + pengine: fixes pengine crash with orphaned remote node connection resource
-+  + pengine: fixes segfault caused by malformed log warning
-+  + pengine: handle cloned isolated resources in a sane way
-+  + pengine: handle isolated resource scenario, cloned group of isolated resources
-+  + pengine: Handle ordering between stateful and migratable resources
-+  + pengine: imply stop in container node resources when host node is fenced
-+  + pengine: only fence baremetal remote when connection fails or can not be recovered
-+  + pengine: only kill process group on timeout when on-fail does not equal block.
-+  + pengine: per-node control over resource discovery
-+  + pengine: prefer migration target for remote node connections
-+  + pengine: prevent disabling rsc discovery per node in certain situations
-+  + pengine: Prevent use-after-free in sort_rsc_process_order()
-+  + pengine: properly handle ordering during remote connection partial migration
-+  + pengine: properly recover remote-nodes when cluster-node proxy goes offline
-+  + pengine: remove unnecessary whitespace from notify environment variables
-+  + pengine: require-all feature for ordered clones
-+  + pengine: Resolve memory leaks
-+  + pengine: resource discovery mode for location constraints
-+  + pengine: restart master instances on instance attribute changes
-+  + pengine: Turn off legacy unpacking of resource options into the meta hashtable
-+  + pengine: Watchdog integration is sufficient for fencing
-+  + Perform systemd reloads asynchronously
-+  + ping: Correctly advertise multiplier default
-+  + Prefer to inherit the watchdog timeout from SBD
-+  + properly record stop args after reload
-+  + provide fake meta data for ra class heartbeat
-+  + remote: report timestamps for remote connection resource operations
-+  + remote: Treat recv msg timeout as a disconnect
-+  + service: Prevent potential use-of-NULL in metadata lookups
-+  + solaris: Allow compilation when dirent.d_type is not available
-+  + solaris: Correctly replace the linux swab functions
-+  + solaris: Disable throttling since /proc doesn't exist
-+  + stonith-ng: Correctly observe the watchdog completion timeout
-+  + stonith-ng: Correctly track node state
-+  + stonith-ng: Reset mainloop source IDs after removing them
-+  + systemd: Correctly handle long running stop actions
-+  + systemd: Ensure failed monitor operations always return
-+  + systemd: Ensure we don't call dbus_message_unref() with NULL
-+  + systemd: fix crash caused when canceling in-flight operation
-+  + systemd: Kindly ask dbus NOT to kill the process if the dbus connection fails
-+  + systemd: Perform actions asynchronously
-+  + systemd: Perform monitor operations without blocking
-+  + systemd: Tell systemd not to take DBus down from underneath us
-+  + systemd: Trick systemd into not stopping our services before us during shutdown
-+  + tools: Improve crm_mon output with certain option combinations
-+  + upstart: Monitor actions always return 'ok' or 'not running'
-+  + upstart: Perform more parts of monitor operations without blocking
-+  + xml: add 'require-all' to xml schema for constraints
-+  + xml: cl#5231 - Unset the deleted attributes in the resulting diffs
-+  + xml: Clone the latest constraint schema in preparation for changes
-+  + xml: Correctly create v1 patchsets when deleting attributes
-+  + xml: Do not change the ordering of properties when applying v1 cib diffs
-+  + xml: Do not dump deleted attributes
-+  + xml: Do not prune leaves from v1 cib diffs that are being created with digests
-+  + xml: Ensure ACLs are reapplied before calculating what a replace operation \
                changed
-+  + xml: Fix upgrade-1.3.xsl to correctly transform ACL rules with "attribute"
-+  + xml: Prevent assert errors in crm_element_value() on applying a patch without \
                version information
-+  + xml: Prevent potential use-of-NULL
-+
-+
- * Tue Jul 22 2014 Andrew Beekhof <andrew@beekhof.net> Pacemaker-1.1.12-1
- - Update source tarball to revision: 93a037d
- - Changesets: 795
-diff --git a/attrd/commands.c b/attrd/commands.c
-index 442c5f8..18c0523 100644
---- a/attrd/commands.c
-+++ b/attrd/commands.c
-@@ -289,6 +289,9 @@ attrd_client_update(xmlNode *xml)
-
-             crm_info("Expanded %s=%s to %d", attr, value, int_value);
-             crm_xml_add_int(xml, F_ATTRD_VALUE, int_value);
-+
-+            /* Replacing the value frees the previous memory, so re-query it */
-+            value = crm_element_value(xml, F_ATTRD_VALUE);
-         }
-     }
-
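
The attrd hunk above guards against a classic lifetime bug: "value" pointed into the XML node, and crm_xml_add_int() replaces, and thereby frees, the stored string. The rule "re-fetch after replace", illustrated stand-alone with a toy one-attribute node:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct node { char *value; };            /* a one-attribute "XML node" */

    static void node_set(struct node *n, const char *v)
    {
        free(n->value);                      /* the old string dies here */
        n->value = strdup(v);
    }

    static const char *node_get(const struct node *n)
    {
        return n->value;                     /* borrowed, not owned */
    }

    int main(void)
    {
        struct node n = { strdup("5") };
        const char *value = node_get(&n);    /* points into the node */

        node_set(&n, "6");                   /* frees what "value" pointed at */
        /* printf("%s\n", value);            -- use-after-free, do not do this */
        value = node_get(&n);                /* re-query instead, as the patch does */
        printf("value is now %s\n", value);

        free(n.value);
        return 0;
    }
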
-diff --git a/cib/callbacks.c b/cib/callbacks.c
-index 71c487e..1452ded 100644
---- a/cib/callbacks.c
-+++ b/cib/callbacks.c
-@@ -40,6 +40,8 @@
- #include <notify.h>
- #include "common.h"
-
-+static unsigned long cib_local_bcast_num = 0;
-+
- typedef struct cib_local_notify_s {
-     xmlNode *notify_src;
-     char *client_id;
-@@ -48,7 +50,13 @@ typedef struct cib_local_notify_s {
- } cib_local_notify_t;
-
- int next_client_id = 0;
-+
-+#if SUPPORT_PLUGIN
-+gboolean legacy_mode = TRUE;
-+#else
- gboolean legacy_mode = FALSE;
-+#endif
-+
- qb_ipcs_service_t *ipcs_ro = NULL;
- qb_ipcs_service_t *ipcs_rw = NULL;
- qb_ipcs_service_t *ipcs_shm = NULL;
-@@ -82,8 +90,12 @@ static gboolean cib_read_legacy_mode(void)
-     return legacy;
- }
-
--static gboolean cib_legacy_mode(void)
-+gboolean cib_legacy_mode(void)
- {
-+#if SUPPORT_PLUGIN
-+    return TRUE;
-+#endif
-+
-     if(cib_read_legacy_mode()) {
-         return TRUE;
-     }
-@@ -442,6 +454,54 @@ do_local_notify(xmlNode * notify_src, const char *client_id,
- }
-
- static void
-+local_notify_destroy_callback(gpointer data)
-+{
-+    cib_local_notify_t *notify = data;
-+
-+    free_xml(notify->notify_src);
-+    free(notify->client_id);
-+    free(notify);
-+}
-+
-+static void
-+check_local_notify(int bcast_id)
-+{
-+    cib_local_notify_t *notify = NULL;
-+
-+    if (!local_notify_queue) {
-+        return;
-+    }
-+
-+    notify = g_hash_table_lookup(local_notify_queue, GINT_TO_POINTER(bcast_id));
-+
-+    if (notify) {
-+        do_local_notify(notify->notify_src, notify->client_id, notify->sync_reply,
-+                        notify->from_peer);
-+        g_hash_table_remove(local_notify_queue, GINT_TO_POINTER(bcast_id));
-+    }
-+}
-+
-+static void
-+queue_local_notify(xmlNode * notify_src, const char *client_id, gboolean \
                sync_reply,
-+                   gboolean from_peer)
-+{
-+    cib_local_notify_t *notify = calloc(1, sizeof(cib_local_notify_t));
-+
-+    notify->notify_src = notify_src;
-+    notify->client_id = strdup(client_id);
-+    notify->sync_reply = sync_reply;
-+    notify->from_peer = from_peer;
-+
-+    if (!local_notify_queue) {
-+        local_notify_queue = g_hash_table_new_full(g_direct_hash,
-+                                                   g_direct_equal, NULL,
-+                                                   local_notify_destroy_callback);
-+    }
-+
-+    g_hash_table_insert(local_notify_queue, GINT_TO_POINTER(cib_local_bcast_num), \
                notify);
-+}
-+
-+static void
- parse_local_options_v1(crm_client_t * cib_client, int call_type, int call_options, \
                const char *host,
-                     const char *op, gboolean * local_notify, gboolean * \
                needs_reply,
-                     gboolean * process, gboolean * needs_forward)
-@@ -814,9 +874,12 @@ send_peer_reply(xmlNode * msg, xmlNode * result_diff, const \
                char *originator, gb
-         int diff_del_admin_epoch = 0;
-
-         const char *digest = NULL;
-+        int format = 1;
-
-         CRM_LOG_ASSERT(result_diff != NULL);
-         digest = crm_element_value(result_diff, XML_ATTR_DIGEST);
-+        crm_element_value_int(result_diff, "format", &format);
-+
-         cib_diff_version_details(result_diff,
-                                  &diff_add_admin_epoch, &diff_add_epoch, \
                &diff_add_updates,
-                                  &diff_del_admin_epoch, &diff_del_epoch, \
                &diff_del_updates);
-@@ -829,7 +892,9 @@ send_peer_reply(xmlNode * msg, xmlNode * result_diff, const char \
                *originator, gb
-         crm_xml_add(msg, F_CIB_GLOBAL_UPDATE, XML_BOOLEAN_TRUE);
-         crm_xml_add(msg, F_CIB_OPERATION, CIB_OP_APPLY_DIFF);
-
--        CRM_ASSERT(digest != NULL);
-+        if (format == 1) {
-+            CRM_ASSERT(digest != NULL);
-+        }
-
-         add_message_xml(msg, F_CIB_UPDATE_DIFF, result_diff);
-         crm_log_xml_explicit(msg, "copy");
-@@ -1039,6 +1104,27 @@ cib_process_request(xmlNode * request, gboolean \
                force_synchronous, gboolean priv
-          */
-         crm_trace("Completed slave update");
-
-+    } else if (cib_legacy_mode() &&
-+               rc == pcmk_ok && result_diff != NULL && !(call_options & \
                cib_inhibit_bcast)) {
-+        gboolean broadcast = FALSE;
-+
-+        cib_local_bcast_num++;
-+        crm_xml_add_int(request, F_CIB_LOCAL_NOTIFY_ID, cib_local_bcast_num);
-+        broadcast = send_peer_reply(request, result_diff, originator, TRUE);
-+
-+        if (broadcast && client_id && local_notify && op_reply) {
-+
-+            /* If we have been asked to sync the reply,
-+             * and a bcast msg has gone out, we queue the local notify
-+             * until we know the bcast message has been received */
-+            local_notify = FALSE;
-+            crm_trace("Queuing local %ssync notification for %s",
-+                      (call_options & cib_sync_call) ? "" : "a-", client_id);
-+
-+            queue_local_notify(op_reply, client_id, (call_options & cib_sync_call), \
                from_peer);
-+            op_reply = NULL;    /* the reply is queued, so don't free here */
-+        }
-+
-     } else if (call_options & cib_discard_reply) {
-         crm_trace("Caller isn't interested in reply");
-
-@@ -1322,6 +1408,11 @@ cib_peer_callback(xmlNode * msg, void *private_data)
-
-     if (cib_legacy_mode() && (originator == NULL || crm_str_eq(originator, \
                cib_our_uname, TRUE))) {
-         /* message is from ourselves */
-+        int bcast_id = 0;
-+
-+        if (!(crm_element_value_int(msg, F_CIB_LOCAL_NOTIFY_ID, &bcast_id))) {
-+            check_local_notify(bcast_id);
-+        }
-         return;
-
-     } else if (crm_peer_cache == NULL) {
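
The legacy-mode queue added above parks each client reply in a hash table keyed by an increasing broadcast id; check_local_notify() then releases the reply once our own broadcast arrives back through cib_peer_callback(). A compressed sketch of that lifecycle using GLib directly (the queue and reply names are stand-ins; build with pkg-config --cflags --libs glib-2.0):

    #include <glib.h>
    #include <stdio.h>

    static GHashTable *local_notify_queue = NULL;
    static guint bcast_num = 0;

    static guint queue_reply(gchar *reply)       /* takes ownership of reply */
    {
        if (local_notify_queue == NULL) {
            local_notify_queue = g_hash_table_new_full(g_direct_hash,
                                                       g_direct_equal,
                                                       NULL, g_free);
        }
        bcast_num++;
        g_hash_table_insert(local_notify_queue, GUINT_TO_POINTER(bcast_num), reply);
        return bcast_num;                        /* id travels with the bcast */
    }

    static void on_own_broadcast(guint id)       /* our message came back */
    {
        gchar *reply;

        if (local_notify_queue == NULL) {
            return;
        }
        reply = g_hash_table_lookup(local_notify_queue, GUINT_TO_POINTER(id));
        if (reply != NULL) {
            printf("delivering queued reply: %s\n", reply);
            g_hash_table_remove(local_notify_queue, GUINT_TO_POINTER(id)); /* frees */
        }
    }

    int main(void)
    {
        guint id = queue_reply(g_strdup("cib update ack"));

        on_own_broadcast(id);
        return 0;
    }
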
-diff --git a/cib/callbacks.h b/cib/callbacks.h
-index 7549a6c..bca9992 100644
---- a/cib/callbacks.h
-+++ b/cib/callbacks.h
-@@ -73,6 +73,8 @@ void cib_shutdown(int nsig);
- void initiate_exit(void);
- void terminate_cib(const char *caller, gboolean fast);
-
-+extern gboolean cib_legacy_mode(void);
-+
- #if SUPPORT_HEARTBEAT
- extern void cib_ha_peer_callback(HA_Message * msg, void *private_data);
- extern int cib_ccm_dispatch(gpointer user_data);
-diff --git a/cib/main.c b/cib/main.c
-index 2a48054..e20a2b6 100644
---- a/cib/main.c
-+++ b/cib/main.c
-@@ -438,6 +438,13 @@ cib_peer_update_callback(enum crm_status_type type, crm_node_t \
                * node, const voi
-
-     if (cib_shutdown_flag && crm_active_peers() < 2 && \
                crm_hash_table_size(client_connections) == 0) {
-         crm_info("No more peers");
-+        /* @TODO
-+         * terminate_cib() calls crm_cluster_disconnect() which calls
-+         * crm_peer_destroy() which destroys the peer caches, which a peer
-+         * status callback shouldn't do. For now, there is a workaround in
-+         * crm_update_peer_proc(), but CIB should be refactored to avoid
-+         * destroying the peer caches here.
-+         */
-         terminate_cib(__FUNCTION__, FALSE);
-     }
- }
-diff --git a/cib/messages.c b/cib/messages.c
-index 9c66349..363562c 100644
---- a/cib/messages.c
-+++ b/cib/messages.c
-@@ -297,7 +297,14 @@ cib_process_upgrade_server(const char *op, int options, const \
                char *section, xml
-             crm_xml_add(up, F_CIB_CALLOPTS, crm_element_value(req, \
                F_CIB_CALLOPTS));
-             crm_xml_add(up, F_CIB_CALLID, crm_element_value(req, F_CIB_CALLID));
-
--            send_cluster_message(NULL, crm_msg_cib, up, FALSE);
-+            if (cib_legacy_mode() && cib_is_master) {
-+                rc = cib_process_upgrade(
-+                    op, options, section, up, input, existing_cib, result_cib, \
                answer);
-+
-+            } else {
-+                send_cluster_message(NULL, crm_msg_cib, up, FALSE);
-+            }
-+
-             free_xml(up);
-
-         } else if(rc == pcmk_ok) {
-diff --git a/crmd/lrm.c b/crmd/lrm.c
-index 74fede4..062f769 100644
---- a/crmd/lrm.c
-+++ b/crmd/lrm.c
-@@ -454,8 +454,6 @@ get_rsc_metadata(const char *type, const char *rclass, const \
                char *provider, boo
-
-     snprintf(key, len, "%s::%s:%s", type, rclass, provider);
-     if(force == FALSE) {
--        snprintf(key, len, "%s::%s:%s", type, rclass, provider);
--
-         crm_trace("Retreiving cached metadata for %s", key);
-         metadata = g_hash_table_lookup(metadata_hash, key);
-     }
-@@ -581,7 +579,7 @@ resource_supports_action(xmlNode *metadata, const char *name)
-     for (action = __xml_first_child(actions); action != NULL; action = \
                __xml_next(action)) {
-         if (crm_str_eq((const char *)action->name, "action", TRUE)) {
-             value = crm_element_value(action, "name");
--            if (safe_str_eq("reload", value)) {
-+            if (safe_str_eq(name, value)) {
-                 return TRUE;
-             }
-         }
-@@ -606,16 +604,18 @@ append_restart_list(lrmd_event_data_t *op, xmlNode *metadata, \
                xmlNode * update,
-
-     if(resource_supports_action(metadata, "reload")) {
-         restart = create_xml_node(NULL, XML_TAG_PARAMS);
--        list = build_parameter_list(op, metadata, restart, "unique", FALSE, FALSE);
--    }
-+        /* Any parameters with unique="1" should be added into the \
                "op-force-restart" list. */
-+        list = build_parameter_list(op, metadata, restart, "unique", TRUE, FALSE);
-
--    if (list == NULL) {
-+    } else {
-         /* Resource does not support reloads */
-         return;
-     }
-
-     digest = calculate_operation_digest(restart, version);
--    crm_xml_add(update, XML_LRM_ATTR_OP_RESTART, list);
-+    /* Add "op-force-restart" and "op-restart-digest" to indicate the resource \
                supports reload,
-+     * whether or not it actually supports any parameters with unique="1". */
-+    crm_xml_add(update, XML_LRM_ATTR_OP_RESTART, list? list: "");
-     crm_xml_add(update, XML_LRM_ATTR_RESTART_DIGEST, digest);
-
-     crm_trace("%s: %s, %s", op->rsc_id, digest, list);
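
Two independent fixes hide in the lrm.c hunks above: resource_supports_action() compared the advertised action against a hard-coded "reload" instead of its name argument, and an agent that advertises reload now always gets an op-force-restart entry, even an empty one. The first fix, reduced to a stand-alone sketch:

    #include <stdio.h>
    #include <string.h>

    /* Actions this toy "agent metadata" advertises. */
    static const char *advertised[] = { "start", "stop", "monitor", "reload" };

    static int supports_action(const char *name)
    {
        size_t i;

        for (i = 0; i < sizeof(advertised) / sizeof(advertised[0]); i++) {
            /* The bug: this used to compare against a literal "reload",
             * so the answer was the same whatever "name" was asked about. */
            if (strcmp(advertised[i], name) == 0) {
                return 1;
            }
        }
        return 0;
    }

    int main(void)
    {
        printf("reload:     %d\n", supports_action("reload"));
        printf("migrate_to: %d\n", supports_action("migrate_to"));
        return 0;
    }
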
-diff --git a/crmd/throttle.c b/crmd/throttle.c
-index 165050c..169594b 100644
---- a/crmd/throttle.c
-+++ b/crmd/throttle.c
-@@ -92,41 +92,60 @@ int throttle_num_cores(void)
-     return cores;
- }
-
-+/*
-+ * \internal
-+ * \brief Return name of /proc file containing the CIB daemon's load statistics
-+ *
-+ * \return Newly allocated memory with file name on success, NULL otherwise
-+ *
-+ * \note It is the caller's responsibility to free the return value.
-+ *       This will return NULL if the daemon is being run via valgrind.
-+ *       This should be called only on Linux systems.
-+ */
- static char *find_cib_loadfile(void)
- {
-     DIR *dp;
-     struct dirent *entry;
-     struct stat statbuf;
-     char *match = NULL;
-+    char procpath[128];
-+    char value[64];
-+    char key[16];
-
-     dp = opendir("/proc");
-     if (!dp) {
-         /* no proc directory to search through */
-         crm_notice("Can not read /proc directory to track existing components");
--        return FALSE;
-+        return NULL;
-     }
-
-+    /* Iterate through contents of /proc */
-     while ((entry = readdir(dp)) != NULL) {
--        char procpath[128];
--        char value[64];
--        char key[16];
-         FILE *file;
-         int pid;
-
--        strcpy(procpath, "/proc/");
--        /* strlen("/proc/") + strlen("/status") + 1 = 14
--         * 128 - 14 = 114 */
--        strncat(procpath, entry->d_name, 114);
--
--        if (lstat(procpath, &statbuf)) {
-+        /* We're only interested in entries whose name is a PID,
-+         * so skip anything non-numeric or that is too long.
-+         *
-+         * 114 = 128 - strlen("/proc/") - strlen("/status") - 1
-+         */
-+        pid = atoi(entry->d_name);
-+        if ((pid <= 0) || (strlen(entry->d_name) > 114)) {
-             continue;
-         }
--        if (!S_ISDIR(statbuf.st_mode) || !isdigit(entry->d_name[0])) {
-+
-+        /* We're only interested in subdirectories */
-+        strcpy(procpath, "/proc/");
-+        strcat(procpath, entry->d_name);
-+        if (lstat(procpath, &statbuf) || !S_ISDIR(statbuf.st_mode)) {
-             continue;
-         }
-
-+        /* Read the first entry ("Name:") from the process's status file.
-+         * We could handle the valgrind case if we parsed the cmdline file
-+         * instead, but that's more of a pain than it's worth.
-+         */
-         strcat(procpath, "/status");
--
-         file = fopen(procpath, "r");
-         if (!file) {
-             continue;
-@@ -137,17 +156,11 @@ static char *find_cib_loadfile(void)
-         }
-         fclose(file);
-
--        if (safe_str_neq("cib", value)) {
--            continue;
--        }
--
--        pid = atoi(entry->d_name);
--        if (pid <= 0) {
--            continue;
-+        if (safe_str_eq("cib", value)) {
-+            /* We found the CIB! */
-+            match = crm_strdup_printf("/proc/%d/stat", pid);
-+            break;
-         }
--
--        match = crm_strdup_printf("/proc/%d/stat", pid);
--        break;
-     }
-
-     closedir(dp);
-@@ -214,6 +227,10 @@ static bool throttle_cib_load(float *load)
-         last_utime = 0;
-         last_stime = 0;
-         loadfile = find_cib_loadfile();
-+        if (loadfile == NULL) {
-+            crm_warn("Couldn't find CIB load file");
-+            return FALSE;
-+        }
-         ticks_per_s = sysconf(_SC_CLK_TCK);
-         crm_trace("Found %s", loadfile);
-     }
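
The rewritten find_cib_loadfile() above is the usual /proc scan: keep numeric entries only, read the Name: field from /proc/<pid>/status, and stop at the first process called "cib". A self-contained, Linux-only rendering of that loop with simplified error handling:

    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *find_loadfile(const char *procname)
    {
        DIR *dp = opendir("/proc");
        struct dirent *entry;
        char *match = NULL;

        if (dp == NULL) {
            return NULL;
        }
        while (match == NULL && (entry = readdir(dp)) != NULL) {
            char path[64], key[16], value[64];
            FILE *file;
            int pid = atoi(entry->d_name);   /* non-numeric names become 0 */

            if (pid <= 0) {
                continue;                    /* only /proc/<pid> is interesting */
            }
            snprintf(path, sizeof(path), "/proc/%d/status", pid);
            file = fopen(path, "r");
            if (file == NULL) {
                continue;                    /* the process may be gone already */
            }
            /* First line of the status file is "Name:\t<comm>" */
            if (fscanf(file, "%15s %63s", key, value) == 2
                && strcmp(value, procname) == 0) {
                match = malloc(32);
                if (match != NULL) {
                    snprintf(match, 32, "/proc/%d/stat", pid);
                }
            }
            fclose(file);
        }
        closedir(dp);
        return match;                        /* caller frees */
    }

    int main(void)
    {
        char *loadfile = find_loadfile("cib");

        printf("%s\n", loadfile ? loadfile : "not found");
        free(loadfile);
        return 0;
    }
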
-diff --git a/cts/CIB.py b/cts/CIB.py
-index cdfc7ca..82d02d7 100644
---- a/cts/CIB.py
-+++ b/cts/CIB.py
-@@ -312,7 +312,7 @@ Description=Dummy resource that takes a while to start
- Type=notify
- ExecStart=/usr/bin/python -c 'import time, systemd.daemon; time.sleep(10); \
                systemd.daemon.notify("READY=1"); time.sleep(86400)'
- ExecStop=/bin/sleep 10
--ExecStop=/bin/kill -s KILL $MAINPID
-+ExecStop=/bin/kill -s KILL \$MAINPID
- """
-
-             os.system("cat <<-END >/tmp/DummySD.service\n%s\nEND" % \
                (dummy_service_file))
-diff --git a/cts/CTStests.py b/cts/CTStests.py
-index 14ab4bf..f817004 100644
---- a/cts/CTStests.py
-+++ b/cts/CTStests.py
-@@ -1105,7 +1105,7 @@ class MaintenanceMode(CTSTest):
-         # fail the resource right after turning Maintenance mode on
-         # verify it is not recovered until maintenance mode is turned off
-         if action == "On":
--            pats.append("pengine.*: warning: Processing failed op %s for %s on" % \
                (self.action, self.rid))
-+            pats.append("pengine.*: warning:.* Processing failed op %s for %s on" % \
                (self.action, self.rid))
-         else:
-             pats.append(self.templates["Pat:RscOpOK"] % (self.rid, "stop_0"))
-             pats.append(self.templates["Pat:RscOpOK"] % (self.rid, "start_0"))
-@@ -1314,7 +1314,8 @@ class ResourceRecover(CTSTest):
-         self.debug("Shooting %s aka. %s" % (rsc.clone_id, rsc.id))
-
-         pats = []
--        pats.append("pengine.*: warning: Processing failed op %s for %s on" % \
                (self.action, self.rid))
-+        pats.append(r"pengine.*: warning:.* Processing failed op %s for (%s|%s) on" \
                % (self.action,
-+            rsc.id, rsc.clone_id))
-
-         if rsc.managed():
-             pats.append(self.templates["Pat:RscOpOK"] % (self.rid, "stop_0"))
-@@ -2647,32 +2648,31 @@ class RemoteDriver(CTSTest):
-         self.remote_node_added = 0
-         self.remote_rsc_added = 0
-         self.remote_rsc = "remote-rsc"
-+        self.remote_use_reconnect_interval = \
                self.Env.RandomGen.choice(["true","false"])
-         self.cib_cmd = """cibadmin -C -o %s -X '%s' """
-
--    def del_rsc(self, node, rsc):
--
-+    def get_othernode(self, node):
-         for othernode in self.Env["nodes"]:
-             if othernode == node:
-                 # we don't want to try and use the cib that we just shutdown.
-                 # find a cluster node that is not our soon to be remote-node.
-                 continue
--            rc = self.rsh(othernode, "crm_resource -D -r %s -t primitive" % (rsc))
--            if rc != 0:
--                self.fail_string = ("Removal of resource '%s' failed" % (rsc))
--                self.failed = 1
--            return
-+            else:
-+                return othernode
-+
-+    def del_rsc(self, node, rsc):
-+        othernode = self.get_othernode(node)
-+        rc = self.rsh(othernode, "crm_resource -D -r %s -t primitive" % (rsc))
-+        if rc != 0:
-+            self.fail_string = ("Removal of resource '%s' failed" % (rsc))
-+            self.failed = 1
-
-     def add_rsc(self, node, rsc_xml):
--        for othernode in self.CM.Env["nodes"]:
--            if othernode == node:
--                # we don't want to try and use the cib that we just shutdown.
--                # find a cluster node that is not our soon to be remote-node.
--                continue
--            rc = self.rsh(othernode, self.cib_cmd % ("resources", rsc_xml))
--            if rc != 0:
--                self.fail_string = "resource creation failed"
--                self.failed = 1
--            return
-+        othernode = self.get_othernode(node)
-+        rc = self.rsh(othernode, self.cib_cmd % ("resources", rsc_xml))
-+        if rc != 0:
-+            self.fail_string = "resource creation failed"
-+            self.failed = 1
-
-     def add_primitive_rsc(self, node):
-         rsc_xml = """
-@@ -2687,7 +2687,24 @@ class RemoteDriver(CTSTest):
-             self.remote_rsc_added = 1
-
-     def add_connection_rsc(self, node):
--        rsc_xml = """
-+        if self.remote_use_reconnect_interval == "true":
-+            # use reconnect interval and make sure to set cluster-recheck-interval \
                as well.
-+            rsc_xml = """
-+<primitive class="ocf" id="%s" provider="pacemaker" type="remote">
-+    <instance_attributes id="remote-instance_attributes"/>
-+        <instance_attributes id="remote-instance_attributes">
-+          <nvpair id="remote-instance_attributes-server" name="server" value="%s"/>
-+          <nvpair id="remote-instance_attributes-reconnect_interval" \
                name="reconnect_interval" value="60s"/>
-+        </instance_attributes>
-+    <operations>
-+      <op id="remote-monitor-interval-60s" interval="60s" name="monitor"/>
-+      <op id="remote-name-start-interval-0-timeout-120" interval="0" name="start" \
                timeout="60"/>
-+    </operations>
-+</primitive>""" % (self.remote_node, node)
-+            self.rsh(self.get_othernode(node), self.templates["SetCheckInterval"] % \
                ("45s"))
-+        else:
-+            # not using reconnect interval
-+            rsc_xml = """
- <primitive class="ocf" id="%s" provider="pacemaker" type="remote">
-     <instance_attributes id="remote-instance_attributes"/>
-         <instance_attributes id="remote-instance_attributes">
-@@ -2698,6 +2715,7 @@ class RemoteDriver(CTSTest):
-       <op id="remote-name-start-interval-0-timeout-120" interval="0" name="start" \
                timeout="120"/>
-     </operations>
- </primitive>""" % (self.remote_node, node)
-+
-         self.add_rsc(node, rsc_xml)
-         if self.failed == 0:
-             self.remote_node_added = 1
-@@ -2836,7 +2854,7 @@ class RemoteDriver(CTSTest):
-         self.CM.ns.WaitForNodeToComeUp(node, 120);
-
-         pats = [ ]
--        watch = self.create_watch(pats, 120)
-+        watch = self.create_watch(pats, 200)
-         watch.setwatch()
-         pats.append(self.templates["Pat:RscOpOK"] % (self.remote_node, "start"))
-         if self.remote_rsc_added == 1:
-@@ -2927,12 +2945,19 @@ class RemoteDriver(CTSTest):
-             pats.append(self.templates["Pat:RscOpOK"] % (self.remote_node, "stop"))
-
-         self.set_timer("remoteMetalCleanup")
-+
-+        if self.remote_use_reconnect_interval == "true":
-+            self.debug("Cleaning up re-check interval")
-+            self.rsh(self.get_othernode(node), \
                self.templates["ClearCheckInterval"])
-         if self.remote_rsc_added == 1:
-+            self.debug("Cleaning up dummy rsc put on remote node")
-             self.rsh(node, "crm_resource -U -r %s -N %s" % (self.remote_rsc, \
                self.remote_node))
-             self.del_rsc(node, self.remote_rsc)
-         if self.remote_node_added == 1:
-+            self.debug("Cleaning up remote node connection resource")
-             self.rsh(node, "crm_resource -U -r %s" % (self.remote_node))
-             self.del_rsc(node, self.remote_node)
-+
-         watch.lookforall()
-         self.log_timer("remoteMetalCleanup")
-
-diff --git a/cts/environment.py b/cts/environment.py
-index 6edf331..a3399c3 100644
---- a/cts/environment.py
-+++ b/cts/environment.py
-@@ -160,7 +160,7 @@ class Environment:
-             self.data["Stack"] = "heartbeat"
-
-         elif name == "openais" or name == "ais"  or name == "whitetank":
--            self.data["Stack"] = "openais (whitetank)"
-+            self.data["Stack"] = "corosync (plugin v0)"
-
-         elif name == "corosync" or name == "cs" or name == "mcp":
-             self.data["Stack"] = "corosync 2.x"
-@@ -351,6 +351,10 @@ class Environment:
-                     self["DoFencing"]=1
-                 elif args[i+1] == "0" or args[i+1] == "no":
-                     self["DoFencing"]=0
-+                elif args[i+1] == "phd":
-+                    self["DoStonith"]=1
-+                    self["stonith-type"] = "fence_phd_kvm"
-+                    self["stonith-params"] = "pcmk_arg_map=domain:uname,delay=0"
-                 elif args[i+1] == "rhcs" or args[i+1] == "xvm" or args[i+1] == \
                "virt":
-                     self["DoStonith"]=1
-                     self["stonith-type"] = "fence_xvm"
-diff --git a/cts/patterns.py b/cts/patterns.py
-index 8398c7e..1bc05a6 100644
---- a/cts/patterns.py
-+++ b/cts/patterns.py
-@@ -32,6 +32,9 @@ class BasePatterns:
-
-             "UUIDQueryCmd"    : "crmadmin -N",
-
-+            "SetCheckInterval"    : "cibadmin --modify -c --xml-text \
'<cluster_property_set id=\"cib-bootstrap-options\"><nvpair \
id=\"cts-recheck-interval-setting\" name=\"cluster-recheck-interval\" \
                value=\"%s\"/></cluster_property_set>'",
-+            "ClearCheckInterval"    : "cibadmin --delete --xpath \
                \"//nvpair[@name='cluster-recheck-interval']\"",
-+
-             "MaintenanceModeOn"    : "cibadmin --modify -c --xml-text \
'<cluster_property_set id=\"cib-bootstrap-options\"><nvpair \
id=\"cts-maintenance-mode-setting\" name=\"maintenance-mode\" \
                value=\"true\"/></cluster_property_set>'",
-             "MaintenanceModeOff"    : "cibadmin --delete --xpath \
                \"//nvpair[@name='maintenance-mode']\"",
-
-@@ -291,6 +294,9 @@ class crm_cs_v0(BasePatterns):
-             r"error:.*Connection to cib_shm failed",
-             r"error:.*Connection to cib_shm.* closed",
-             r"error:.*STONITH connection failed",
-+            r"error: Connection to stonith-ng failed",
-+            r"crit: Fencing daemon connection failed",
-+            r"error: Connection to stonith-ng.* closed",
-             ]
-
-         self.components["corosync"] = [
-diff --git a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt \
                b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
-index 02525d6..a3c02cb 100644
---- a/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
-+++ b/doc/Pacemaker_Explained/en-US/Ch-Stonith.txt
-@@ -343,7 +343,7 @@ http://www.clusterlabs.org/doc/[Clusters from Scratch] guide for \
                those details.
- # cibadmin -C -o resources --xml-file stonith.xml
- ----
-
--. Set stonith-enabled to true:
-+. Set +stonith-enabled+ to true:
- +
- ----
- # crm_attribute -t crm_config -n stonith-enabled -v true
-@@ -831,3 +831,29 @@ Put together, the configuration looks like this:
-   </configuration>
- </cib>
- ----
-+
-+== Remapping Reboots ==
-+
-+When the cluster needs to reboot a node, whether because +stonith-action+ is \
                +reboot+ or because
-+a reboot was manually requested (such as by `stonith_admin --reboot`), it will \
                remap that to
-+other commands in two cases:
-+
-+. If the chosen fencing device does not support the +reboot+ command, the cluster
-+  will ask it to perform +off+ instead.
-+
-+. If a fencing topology level with multiple devices must be executed, the cluster
-+  will ask all the devices to perform +off+, then ask the devices to perform +on+.
-+
-+To understand the second case, consider the example of a node with redundant
-+power supplies connected to intelligent power switches. Rebooting one switch
-+and then the other would have no effect on the node. Turning both switches off,
-+and then on, actually reboots the node.
-+
-+In such a case, the fencing operation will be treated as successful as long as
-+the +off+ commands succeed, because then it is safe for the cluster to recover
-+any resources that were on the node. Timeouts and errors in the +on+ phase will
-+be logged but ignored.
-+
-+When a reboot operation is remapped, any action-specific timeout for the
-+remapped action will be used (for example, +pcmk_off_timeout+ will be used when
-+executing the +off+ command, not +pcmk_reboot_timeout+).
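
The two remapping rules documented above reduce to a small decision: an unsupported reboot becomes off, and a multi-device topology level becomes off on every device (which must succeed) followed by on on every device (best effort). Sketched with stand-in types rather than the stonith-ng API:

    #include <stdio.h>

    struct device { const char *id; int supports_reboot; };

    static void reboot_node(struct device *devs, int ndevs)
    {
        int i;

        if (ndevs == 1 && devs[0].supports_reboot) {
            printf("%s: reboot\n", devs[0].id);  /* no remapping needed */
            return;
        }
        /* Remap: every device performs "off" (required for success)... */
        for (i = 0; i < ndevs; i++) {
            printf("%s: off (pcmk_off_timeout applies)\n", devs[i].id);
        }
        /* ...then every device performs "on" (failures logged, ignored). */
        for (i = 0; i < ndevs; i++) {
            printf("%s: on\n", devs[i].id);
        }
    }

    int main(void)
    {
        struct device psus[] = { { "switch-1", 1 }, { "switch-2", 1 } };

        reboot_node(psus, 2);    /* redundant PSUs: off both, then on both */
        return 0;
    }
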
-diff --git a/doc/asciidoc.reference b/doc/asciidoc.reference
-index a9a171b..9323864 100644
---- a/doc/asciidoc.reference
-+++ b/doc/asciidoc.reference
-@@ -1,31 +1,49 @@
-+= Single-chapter part of the documentation =
-+
-+== Go-to reference chapter for how we use AsciiDoc on this project ==
-+
-+[NOTE]
-+======
-+This is *not* an attempt for fully self-hosted AsciiDoc document,
-+consider it a plaintext full of AsciiDoc samples (it's up to the reader
-+to recognize the borderline) at documentation writers' disposal
-+to somewhat standardize the style{empty}footnote:[
-+  style of both source notation and final visual appearance
-+].
-+
- See also:
-    http://powerman.name/doc/asciidoc
-+======
-
--Commands:    `some-tool --with option`
--Files:       '/tmp/file.name'
--Italic:	     _some text_
-+Emphasis:    _some text_
- Mono:        +some text+
--Bold:        *some text*
--Super:	     ^some text^
--Sub:	     ~some text~
-+Strong:      *some text*
-+Super:       ^some text^
-+Sub:         ~some text~
- Quotes:
-              ``double quoted''
-               `single quoted'
-
--Tool:        command
-+Command:     `some-tool --with option`
-+Newly introduced term:
-+             'some text' (another form of emphasis as of this edit)
-+
-+File:        mono
- Literal:     mono
-+Tool:        command
-+Option:      mono
-+Replaceable: emphasis mono
- Varname:     mono
--Option:      italic
--Emphasis:    italic bold
--Replaceable: italic mono
-+Term encountered on system (e.g., menu choice, hostname):
-+             strong
-
-
--.Title for Eaxmple
-+.Title for Example
- =====
- Some text
- =====
-
--.Title for Eaxmple with XML Listing
-+.Title for Example with XML Listing
- =====
- [source,XML]
- -----
-@@ -49,4 +67,4 @@ Section anchors:
-
- References to section anchors:
-
--<<s-name>> or <<s-name,Alternate Text>>
-\ No newline at end of file
-+<<s-name>> or <<s-name,Alternate Text>>
-diff --git a/doc/shared/en-US/pacemaker-intro.txt \
                b/doc/shared/en-US/pacemaker-intro.txt
-index bf432fc..6b898c9 100644
---- a/doc/shared/en-US/pacemaker-intro.txt
-+++ b/doc/shared/en-US/pacemaker-intro.txt
-@@ -1,41 +1,62 @@
-
--== What Is Pacemaker? ==
-+== What Is 'Pacemaker'? ==
-
--Pacemaker is a cluster resource manager.
-+Pacemaker is a 'cluster resource manager', that is, a logic responsible
-+for a life-cycle of deployed software -- indirectly perhaps even whole
-+systems or their interconnections -- under its control within a set of
-+computers (a.k.a. 'cluster nodes', 'nodes' for short) and driven by
-+prescribed rules.
-
- It achieves maximum availability for your cluster services
--(aka. resources) by detecting and recovering from node- and
-+(a.k.a. 'resources') by detecting and recovering from node- and
- resource-level failures by making use of the messaging and membership
- capabilities provided by your preferred cluster infrastructure (either
- http://www.corosync.org/[Corosync] or
--http://linux-ha.org/wiki/Heartbeat[Heartbeat]).
-+http://linux-ha.org/wiki/Heartbeat[Heartbeat]), and possibly by
-+utilizing other parts of the overall cluster stack.
-+
-+.High Availability Clusters
-+[NOTE]
-+For *the goal of minimal downtime* a term 'high availability' was coined
-+and together with its acronym, 'HA', is well-established in the sector.
-+To differentiate this sort of clusters from high performance computing
-+('HPC') ones, should a context require it (apparently, not the case in
-+this document), using 'HA cluster' is an option.
-
- Pacemaker's key features include:
-
-  * Detection and recovery of node and service-level failures
-  * Storage agnostic, no requirement for shared storage
-  * Resource agnostic, anything that can be scripted can be clustered
-- * Supports fencing (aka. STONITH) for ensuring data integrity
-+ * Supports 'fencing' (also referred to as the 'STONITH' acronym,
-+   <<s-intro-stonith,deciphered>> later on) for ensuring data integrity
-  * Supports large and small clusters
-  * Supports both quorate and resource-driven clusters
-  * Supports practically any redundancy configuration
-- * Automatically replicated configuration that can be updated from any node
-- * Ability to specify cluster-wide service ordering, colocation and anti-colocation
-+ * Automatically replicated configuration that can be updated
-+   from any node
-+ * Ability to specify cluster-wide service ordering,
-+   colocation and anti-colocation
-  * Support for advanced service types
-  ** Clones: for services which need to be active on multiple nodes
-- ** Multi-state: for services with multiple modes (eg. master/slave, \
                primary/secondary)
-- * Unified, scriptable, cluster management tools.
-+ ** Multi-state: for services with multiple modes
-+    (e.g. master/slave, primary/secondary)
-+ * Unified, scriptable cluster management tools
-
- == Pacemaker Architecture ==
-
- At the highest level, the cluster is made up of three pieces:
-
-- * Non-cluster-aware components. These pieces
-+ * *Non-cluster-aware components*. These pieces
-    include the resources themselves; scripts that start, stop and
-    monitor them; and a local daemon that masks the differences
-    between the different standards these scripts implement.
-+   Even though interactions of these resources when run as multiple
-+   instances can resemble a distributed system, they still lack
-+   the proper HA mechanisms and/or autonomous cluster-wide governance
-+   as subsumed in the following item.
-
-- * Resource management. Pacemaker provides the brain that processes
-+ * *Resource management*. Pacemaker provides the brain that processes
-    and reacts to events regarding the cluster.  These events include
-    nodes joining or leaving the cluster; resource events caused by
-    failures, maintenance and scheduled activities; and other
-@@ -44,21 +65,24 @@ At the highest level, the cluster is made up of three pieces:
-    events. This may include moving resources, stopping nodes and even
-    forcing them offline with remote power switches.
-
-- * Low-level infrastructure. Projects like Corosync, CMAN and
--   Heartbeat provide reliable messaging, membership and quorum
-+ * *Low-level infrastructure*. Projects like 'Corosync', 'CMAN' and
-+   'Heartbeat' provide reliable messaging, membership and quorum
-    information about the cluster.
-
- When combined with Corosync, Pacemaker also supports popular open
--source cluster filesystems.
--footnote:[Even though Pacemaker also supports Heartbeat, the filesystems need
--to use the stack for messaging and membership, and Corosync seems to be
--what they're standardizing on. Technically, it would be possible for them to
--support Heartbeat as well, but there seems little interest in this.]
-+source cluster filesystems.{empty}footnote:[
-+  Even though Pacemaker also supports Heartbeat, the filesystems need to
-+  use the stack for messaging and membership, and Corosync seems to be
-+  what they're standardizing on.  Technically, it would be possible for
-+  them to support Heartbeat as well, but there seems little interest
-+  in this.
-+]
-
- Due to past standardization within the cluster filesystem community,
--cluster filesystems make use of a common distributed lock manager, which makes
--use of Corosync for its messaging and membership capabilities (which nodes
--are up/down) and Pacemaker for fencing services.
-+cluster filesystems make use of a common 'distributed lock manager',
-+which makes use of Corosync for its messaging and membership
-+capabilities (which nodes are up/down) and Pacemaker for fencing
-+services.
-
- .The Pacemaker Stack
- image::images/pcmk-stack.png["The Pacemaker \
                stack",width="10cm",height="7.5cm",align="center"]
-@@ -67,75 +91,79 @@ image::images/pcmk-stack.png["The Pacemaker \
                stack",width="10cm",height="7.5cm",a
-
- Pacemaker itself is composed of five key components:
-
-- * Cluster Information Base (CIB)
-- * Cluster Resource Management daemon (CRMd)
-- * Local Resource Management daemon (LRMd)
-- * Policy Engine (PEngine or PE)
-- * Fencing daemon (STONITHd)
-+ * 'Cluster Information Base' ('CIB')
-+ * 'Cluster Resource Management daemon' ('CRMd')
-+ * 'Local Resource Management daemon' ('LRMd')
-+ * 'Policy Engine' ('PEngine' or 'PE')
-+ * Fencing daemon ('STONITHd')
-
- .Internal Components
- image::images/pcmk-internals.png["Subsystems of a Pacemaker \
                cluster",align="center",scaledwidth="65%"]
-
- The CIB uses XML to represent both the cluster's configuration and
- current state of all resources in the cluster. The contents of the CIB
--are automatically kept in sync across the entire cluster and are used
--by the PEngine to compute the ideal state of the cluster and how it
--should be achieved.
-+are automatically kept in sync across the entire cluster and are used by
-+the PEngine to compute the ideal state of the cluster and how it should
-+be achieved.
-
--This list of instructions is then fed to the Designated
--Controller (DC).  Pacemaker centralizes all cluster decision making by
--electing one of the CRMd instances to act as a master. Should the
--elected CRMd process (or the node it is on) fail, a new one is
--quickly established.
-+This list of instructions is then fed to the 'Designated Controller'
-+('DC').  Pacemaker centralizes all cluster decision making by electing
-+one of the CRMd instances to act as a master. Should the elected CRMd
-+process (or the node it is on) fail, a new one is quickly established.
-
- The DC carries out the PEngine's instructions in the required order by
- passing them to either the Local Resource Management daemon (LRMd) or
- CRMd peers on other nodes via the cluster messaging infrastructure
- (which in turn passes them on to their LRMd process).
-
--The peer nodes all report the results of their operations back to the
--DC and, based on the expected and actual results, will either execute
--any actions that needed to wait for the previous one to complete, or
--abort processing and ask the PEngine to recalculate the ideal cluster
--state based on the unexpected results.
-+The peer nodes all report the results of their operations back to the DC
-+and, based on the expected and actual results, will either execute any
-+actions that needed to wait for the previous one to complete, or abort
-+processing and ask the PEngine to recalculate the ideal cluster state
-+based on the unexpected results.
-
- In some cases, it may be necessary to power off nodes in order to
- protect shared data or complete resource recovery. For this, Pacemaker
- comes with STONITHd.
-
--STONITH is an acronym for Shoot-The-Other-Node-In-The-Head and is
--usually implemented with a remote power switch.
-+[[s-intro-stonith]]
-+.STONITH
-+[NOTE]
-+*STONITH* is an acronym for 'Shoot-The-Other-Node-In-The-Head',
-+a recommended practice that misbehaving node is best to be promptly
-+'fenced' (shut off, cut from shared resources or otherwise immobilized),
-+and is usually implemented with a remote power switch.
-
- In Pacemaker, STONITH devices are modeled as resources (and configured
- in the CIB) to enable them to be easily monitored for failure, however
--STONITHd takes care of understanding the STONITH topology such that
--its clients simply request a node be fenced, and it does the rest.
-+STONITHd takes care of understanding the STONITH topology such that its
-+clients simply request a node be fenced, and it does the rest.
-
- == Types of Pacemaker Clusters ==
-
- Pacemaker makes no assumptions about your environment. This allows it
- to support practically any
- http://en.wikipedia.org/wiki/High-availability_cluster#Node_configurations[redundancy
                
--configuration] including Active/Active, Active/Passive, N+1, N+M,
--N-to-1 and N-to-N.
-+configuration] including 'Active/Active', 'Active/Passive', 'N+1',
-+'N+M', 'N-to-1' and 'N-to-N'.
-
- .Active/Passive Redundancy
- image::images/pcmk-active-passive.png["Active/Passive \
                Redundancy",width="10cm",height="7.5cm",align="center"]
-
--Two-node Active/Passive clusters using Pacemaker and DRBD are a
--cost-effective solution for many High Availability situations.
-+Two-node Active/Passive clusters using Pacemaker and 'DRBD' are
-+a cost-effective solution for many High Availability situations.
-
- .Shared Failover
- image::images/pcmk-shared-failover.png["Shared \
                Failover",width="10cm",height="7.5cm",align="center"]
-
- By supporting many nodes, Pacemaker can dramatically reduce hardware
- costs by allowing several active/passive clusters to be combined and
--share a common backup node
-+share a common backup node.
-
- .N to N Redundancy
- image::images/pcmk-active-active.png["N to N \
                Redundancy",width="10cm",height="7.5cm",align="center"]
-
--When shared storage is available, every node can potentially be used
--for failover.  Pacemaker can even run multiple copies of services to
--spread out the workload.
-+When shared storage is available, every node can potentially be used for
-+failover.  Pacemaker can even run multiple copies of services to spread
-+out the workload.
-
-diff --git a/extra/resources/Dummy b/extra/resources/Dummy
-index aec2a0c..8a38ef5 100644
---- a/extra/resources/Dummy
-+++ b/extra/resources/Dummy
-@@ -137,7 +137,7 @@ dummy_stop() {
-     if [ $? =  $OCF_SUCCESS ]; then
- 	rm ${OCF_RESKEY_state}
-     fi
--    rm ${VERIFY_SERIALIZED_FILE}
-+    rm -f ${VERIFY_SERIALIZED_FILE}
-     return $OCF_SUCCESS
- }
-
-diff --git a/extra/resources/ping b/extra/resources/ping
-index e7b9973..ca9db75 100755
---- a/extra/resources/ping
-+++ b/extra/resources/ping
-@@ -43,8 +43,7 @@ meta_data() {
- <version>1.0</version>
-
- <longdesc lang="en">
--Every time the monitor action is run, this resource agent records (in the CIB) the \
                current number of ping nodes the host can connect to.
--It is essentially the same as pingd except that it uses the system ping tool to \
                obtain the results.
-+Every time the monitor action is run, this resource agent records (in the CIB) the \
current number of nodes the host can connect to using the system fping (preferred) or \
                ping tool.
- </longdesc>
- <shortdesc lang="en">node connectivity</shortdesc>
-
-diff --git a/fencing/README.md b/fencing/README.md
-new file mode 100644
-index 0000000..a50c69b
---- /dev/null
-+++ b/fencing/README.md
-@@ -0,0 +1,145 @@
-+# Directory contents
-+
-+* `admin.c`, `stonith_admin.8`: `stonith_admin` command-line tool and its man
-+  page
-+* `commands.c`, `internal.h`, `main.c`, `remote.c`, `stonithd.7`: stonithd and
-+  its man page
-+* `fence_dummy`, `fence_legacy`, `fence_legacy.8`, `fence_pcmk`,
-+  `fence_pcmk.8`: Pacemaker-supplied fence agents and their man pages
-+* `regression.py(.in)`: regression tests for `stonithd`
-+* `standalone_config.c`, `standalone_config.h`: abandoned project
-+* `test.c`: `stonith-test` command-line tool
-+
-+# How fencing requests are handled
-+
-+## Bird's eye view
-+
-+In the broadest terms, stonith works like this (see the sketch after this list):
-+
-+1. The initiator (an external program such as `stonith_admin`, or the cluster
-+   itself via the `crmd`) asks the local `stonithd`, "Hey, can you fence this
-+   node?"
-+1. The local `stonithd` asks all the `stonithd's` in the cluster (including
-+   itself), "Hey, what fencing devices do you have access to that can fence
-+   this node?"
-+1. Each `stonithd` in the cluster replies with a list of available devices that
-+   it knows about.
-+1. Once the original `stonithd` gets all the replies, it asks the most
-+   appropriate `stonithd` peer to actually carry out the fencing. It may send
-+   out more than one such request if the target node must be fenced with
-+   multiple devices.
-+1. The chosen `stonithd(s)` call the appropriate fencing resource agent(s) to
-+   do the fencing, then reply to the original `stonithd` with the result.
-+1. The original `stonithd` broadcasts the result to all `stonithd's`.
-+1. Each `stonithd` sends the result to each of its local clients (including, at
-+   some point, the initiator).
-+
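
As an aside, a minimal sketch of step 1 (the initiator role), driven through
the `stonith_admin` CLI in the same style as `regression.py.in` later in this
commit; the device and node names are illustrative:

    # Sketch: register a dummy device claiming it can fence node3, then ask
    # the local stonithd to fence it; stonithd performs the query/exec steps
    # described above.
    import shlex
    import subprocess

    def stonith_admin(args):
        # Invoke the stonith_admin CLI and return its exit code.
        proc = subprocess.Popen(['stonith_admin'] + shlex.split(args),
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        proc.communicate()
        return proc.returncode

    stonith_admin('-R sketch-dev -a fence_dummy'
                  ' -o "mode=pass" -o "pcmk_host_list=node3"')
    print "fence exit code: %d" % stonith_admin('-F node3 -t 10')
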
-+## Detailed view
-+
-+### Initiating a fencing request
-+
-+A fencing request can be initiated by the cluster or externally, using the
-+libfencing API.
-+
-+* The cluster always initiates fencing via `crmd/te_actions.c:te_fence_node()`
-+  (which calls the `fence()` API). This occurs when a graph synapse contains a
-+  `CRM_OP_FENCE` XML operation.
-+* The main external clients are `stonith_admin` and `stonith-test`.
-+
-+Highlights of the fencing API:
-+* `stonith_api_new()` creates and returns a new `stonith_t` object, whose
-+  `cmds` member has methods for connect, disconnect, fence, etc.
-+* the `fence()` method creates and sends a `STONITH_OP_FENCE XML` request with
-+  the desired action and target node. Callers do not have to choose or even
-+  have any knowledge about particular fencing devices.
-+
-+### Fencing queries
-+
-+The function calls for a stonith request go something like this as of this writing:
-+
-+The local `stonithd` receives the client's request via an IPC or messaging
-+layer callback, which calls
-+* `stonith_command()`, which (for requests) calls
-+  * `handle_request()`, which (for `STONITH_OP_FENCE` from a client) calls
-+    * `initiate_remote_stonith_op()`, which creates a `STONITH_OP_QUERY` XML
-+      request with the target, desired action, timeout, etc., then broadcasts
-+      the operation to the cluster group (i.e. all `stonithd` instances) and
-+      starts a timer. The query is broadcast because (1) location constraints
-+      might prevent the local node from accessing the stonith device directly,
-+      and (2) even if the local node does have direct access, another node
-+      might be preferred to carry out the fencing.
-+
-+Each `stonithd` receives the original `stonithd's STONITH_OP_QUERY` broadcast
-+request via IPC or messaging layer callback, which calls:
-+* `stonith_command()`, which (for requests) calls
-+  *  `handle_request()`, which (for `STONITH_OP_QUERY` from a peer) calls
-+    * `stonith_query()`, which calls
-+      * `get_capable_devices()` with `stonith_query_capable_device_db()` to add
-+        device information to an XML reply and send it. (A message is
-+        considered a reply if it contains `T_STONITH_REPLY`, which is only set
-+        by `stonithd` peers, not clients.)
-+
-+The original `stonithd` receives all peers' `STONITH_OP_QUERY` replies via IPC
-+or messaging layer callback, which calls:
-+* `stonith_command()`, which (for replies) calls
-+  * `handle_reply()` which (for `STONITH_OP_QUERY`) calls
-+    * `process_remote_stonith_query()`, which allocates a new query result
-+      structure, parses device information into it, and adds it to operation
-+      object. It increments the number of replies received for this operation,
-+      and compares it against the expected number of replies (i.e. the number
-+      of active peers), and if this is the last expected reply, calls
-+      * `call_remote_stonith()`, which calculates the timeout and sends
-+        `STONITH_OP_FENCE` request(s) to carry out the fencing. If the target
-+        node has a fencing "topology" (which allows specifications such as
-+        "this node can be fenced either with device A, or devices B and C in
-+        combination"), it will choose the device(s), and send out as many
-+        requests as needed. If it chooses a device, it will choose the peer; a
-+        peer is preferred if it has "verified" access to the desired device,
-+        meaning that it has the device "running" on it and thus has a monitor
-+        operation ensuring reachability.
-+
-+### Fencing operations
-+
-+Each `STONITH_OP_FENCE` request goes something like this as of this writing:
-+
-+The chosen peer `stonithd` receives the `STONITH_OP_FENCE` request via IPC or
-+messaging layer callback, which calls:
-+* `stonith_command()`, which (for requests) calls
-+  * `handle_request()`, which (for `STONITH_OP_FENCE` from a peer) calls
-+    * `stonith_fence()`, which calls
-+      * `schedule_stonith_command()` (using supplied device if
-+        `F_STONITH_DEVICE` was set, otherwise the highest-priority capable
-+        device obtained via `get_capable_devices()` with
-+        `stonith_fence_get_devices_cb()`), which adds the operation to the
-+        device's pending operations list and triggers processing.
-+
-+The chosen peer `stonithd's` mainloop is triggered and calls
-+* `stonith_device_dispatch()`, which calls
-+  * `stonith_device_execute()`, which pops off the next item from the device's
-+    pending operations list. If acting as the (internally implemented) watchdog
-+    agent, it panics the node, otherwise it calls
-+    * `stonith_action_create()` and `stonith_action_execute_async()` to call the \
                fencing agent.
-+
-+The chosen peer stonithd's mainloop is triggered again once the fencing agent \
                returns, and calls
-+* `stonith_action_async_done()` which adds the results to an action object then \
                calls its
-+  * done callback (`st_child_done()`), which calls `schedule_stonith_command()`
-+    for a new device if there are further required actions to execute or if the
-+    original action failed, then builds and sends an XML reply to the original
-+    `stonithd` (via `stonith_send_async_reply()`), then checks whether any
-+    pending actions are the same as the one just executed and merges them if so.
-+
-+### Fencing replies
-+
-+The original `stonithd` receives the `STONITH_OP_FENCE` reply via IPC or
-+messaging layer callback, which calls:
-+* `stonith_command()`, which (for replies) calls
-+  * `handle_reply()`, which calls
-+    * `process_remote_stonith_exec()`, which calls either
-+      `call_remote_stonith()` (to retry a failed operation, or try the next
-+       device in a topology if appropriate, which issues a new
-+      `STONITH_OP_FENCE` request, proceeding as before) or `remote_op_done()`
-+      (if the operation is definitively failed or successful).
-+      * remote_op_done() broadcasts the result to all peers.
-+
-+Finally, all peers receive the broadcast result and call
-+* `remote_op_done()`, which sends the result to all local clients.
-diff --git a/fencing/commands.c b/fencing/commands.c
-index c9975d3..0d2d614 100644
---- a/fencing/commands.c
-+++ b/fencing/commands.c
-@@ -53,15 +53,24 @@ GHashTable *topology = NULL;
- GList *cmd_list = NULL;
-
- struct device_search_s {
-+    /* target of fence action */
-     char *host;
-+    /* requested fence action */
-     char *action;
-+    /* timeout to use if a device is queried dynamically for possible targets */
-     int per_device_timeout;
-+    /* number of registered fencing devices at time of request */
-     int replies_needed;
-+    /* number of device replies received so far */
-     int replies_received;
-+    /* whether the target is eligible to perform requested action (or off) */
-     bool allow_suicide;
-
-+    /* private data to pass to search callback function */
-     void *user_data;
-+    /* function to call when all replies have been received */
-     void (*callback) (GList * devices, void *user_data);
-+    /* devices capable of performing requested action (or off if remapping) */
-     GListPtr capable;
- };
-
-@@ -173,6 +182,17 @@ get_action_timeout(stonith_device_t * device, const char \
                *action, int default_ti
-         char buffer[64] = { 0, };
-         const char *value = NULL;
-
-+        /* If "reboot" was requested but the device does not support it,
-+         * we will remap to "off", so check timeout for "off" instead
-+         */
-+        if (safe_str_eq(action, "reboot")
-+            && is_not_set(device->flags, st_device_supports_reboot)) {
-+            crm_trace("%s doesn't support reboot, using timeout for off instead",
-+                      device->id);
-+            action = "off";
-+        }
-+
-+        /* If the device config specified an action-specific timeout, use it */
-         snprintf(buffer, sizeof(buffer) - 1, "pcmk_%s_timeout", action);
-         value = g_hash_table_lookup(device->params, buffer);
-         if (value) {
-@@ -1241,6 +1261,38 @@ search_devices_record_result(struct device_search_s *search, \
                const char *device,
-     }
- }
-
-+/*
-+ * \internal
-+ * \brief Check whether the local host is allowed to execute a fencing action
-+ *
-+ * \param[in] device         Fence device to check
-+ * \param[in] action         Fence action to check
-+ * \param[in] target         Hostname of fence target
-+ * \param[in] allow_suicide  Whether self-fencing is allowed for this operation
-+ *
-+ * \return TRUE if local host is allowed to execute action, FALSE otherwise
-+ */
-+static gboolean
-+localhost_is_eligible(const stonith_device_t *device, const char *action,
-+                      const char *target, gboolean allow_suicide)
-+{
-+    gboolean localhost_is_target = safe_str_eq(target, stonith_our_uname);
-+
-+    if (device && action && device->on_target_actions
-+        && strstr(device->on_target_actions, action)) {
-+        if (!localhost_is_target) {
-+            crm_trace("%s operation with %s can only be executed for localhost not \
                %s",
-+                      action, device->id, target);
-+            return FALSE;
-+        }
-+
-+    } else if (localhost_is_target && !allow_suicide) {
-+        crm_trace("%s operation does not support self-fencing", action);
-+        return FALSE;
-+    }
-+    return TRUE;
-+}
-+
- static void
- can_fence_host_with_device(stonith_device_t * dev, struct device_search_s *search)
- {
-@@ -1258,19 +1310,20 @@ can_fence_host_with_device(stonith_device_t * dev, struct \
                device_search_s *searc
-         goto search_report_results;
-     }
-
--    if (dev->on_target_actions &&
--        search->action &&
--        strstr(dev->on_target_actions, search->action)) {
--        /* this device can only execute this action on the target node */
--
--        if(safe_str_neq(host, stonith_our_uname)) {
--            crm_trace("%s operation with %s can only be executed for localhost not \
                %s",
--                      search->action, dev->id, host);
-+    /* Short-circuit query if this host is not allowed to perform the action */
-+    if (safe_str_eq(search->action, "reboot")) {
-+        /* A "reboot" *might* get remapped to "off" then "on", so short-circuit
-+         * only if all three are disallowed. If only one or two are disallowed,
-+         * we'll report that with the results. We never allow suicide for
-+         * remapped "on" operations because the host is off at that point.
-+         */
-+        if (!localhost_is_eligible(dev, "reboot", host, search->allow_suicide)
-+            && !localhost_is_eligible(dev, "off", host, search->allow_suicide)
-+            && !localhost_is_eligible(dev, "on", host, FALSE)) {
-             goto search_report_results;
-         }
--
--    } else if(safe_str_eq(host, stonith_our_uname) && search->allow_suicide == \
                FALSE) {
--        crm_trace("%s operation does not support self-fencing", search->action);
-+    } else if (!localhost_is_eligible(dev, search->action, host,
-+                                      search->allow_suicide)) {
-         goto search_report_results;
-     }
-
-@@ -1423,6 +1476,85 @@ struct st_query_data {
-     int call_options;
- };
-
-+/*
-+ * \internal
-+ * \brief Add action-specific attributes to query reply XML
-+ *
-+ * \param[in,out] xml     XML to add attributes to
-+ * \param[in]     action  Fence action
-+ * \param[in]     device  Fence device
-+ */
-+static void
-+add_action_specific_attributes(xmlNode *xml, const char *action,
-+                               stonith_device_t *device)
-+{
-+    int action_specific_timeout;
-+    int delay_max;
-+
-+    CRM_CHECK(xml && action && device, return);
-+
-+    if (is_action_required(action, device)) {
-+        crm_trace("Action %s is required on %s", action, device->id);
-+        crm_xml_add_int(xml, F_STONITH_DEVICE_REQUIRED, 1);
-+    }
-+
-+    action_specific_timeout = get_action_timeout(device, action, 0);
-+    if (action_specific_timeout) {
-+        crm_trace("Action %s has timeout %dms on %s",
-+                  action, action_specific_timeout, device->id);
-+        crm_xml_add_int(xml, F_STONITH_ACTION_TIMEOUT, action_specific_timeout);
-+    }
-+
-+    delay_max = get_action_delay_max(device, action);
-+    if (delay_max > 0) {
-+        crm_trace("Action %s has maximum random delay %dms on %s",
-+                  action, delay_max, device->id);
-+        crm_xml_add_int(xml, F_STONITH_DELAY_MAX, delay_max / 1000);
-+    }
-+}
-+
-+/*
-+ * \internal
-+ * \brief Add "disallowed" attribute to query reply XML if appropriate
-+ *
-+ * \param[in,out] xml            XML to add attribute to
-+ * \param[in]     action         Fence action
-+ * \param[in]     device         Fence device
-+ * \param[in]     target         Fence target
-+ * \param[in]     allow_suicide  Whether self-fencing is allowed
-+ */
-+static void
-+add_disallowed(xmlNode *xml, const char *action, stonith_device_t *device,
-+               const char *target, gboolean allow_suicide)
-+{
-+    if (!localhost_is_eligible(device, action, target, allow_suicide)) {
-+        crm_trace("Action %s on %s is disallowed for local host",
-+                  action, device->id);
-+        crm_xml_add(xml, F_STONITH_ACTION_DISALLOWED, XML_BOOLEAN_TRUE);
-+    }
-+}
-+
-+/*
-+ * \internal
-+ * \brief Add child element with action-specific values to query reply XML
-+ *
-+ * \param[in,out] xml            XML to add attribute to
-+ * \param[in]     action         Fence action
-+ * \param[in]     device         Fence device
-+ * \param[in]     target         Fence target
-+ * \param[in]     allow_suicide  Whether self-fencing is allowed
-+ */
-+static void
-+add_action_reply(xmlNode *xml, const char *action, stonith_device_t *device,
-+               const char *target, gboolean allow_suicide)
-+{
-+    xmlNode *child = create_xml_node(xml, F_STONITH_ACTION);
-+
-+    crm_xml_add(child, XML_ATTR_ID, action);
-+    add_action_specific_attributes(child, action, device);
-+    add_disallowed(child, action, device, target, allow_suicide);
-+}
-+
- static void
- stonith_query_capable_device_cb(GList * devices, void *user_data)
- {
-@@ -1432,13 +1564,12 @@ stonith_query_capable_device_cb(GList * devices, void \
                *user_data)
-     xmlNode *list = NULL;
-     GListPtr lpc = NULL;
-
--    /* Pack the results into data */
-+    /* Pack the results into XML */
-     list = create_xml_node(NULL, __FUNCTION__);
-     crm_xml_add(list, F_STONITH_TARGET, query->target);
-     for (lpc = devices; lpc != NULL; lpc = lpc->next) {
-         stonith_device_t *device = g_hash_table_lookup(device_list, lpc->data);
--        int action_specific_timeout;
--        int delay_max;
-+        const char *action = query->action;
-
-         if (!device) {
-             /* It is possible the device got unregistered while
-@@ -1448,24 +1579,44 @@ stonith_query_capable_device_cb(GList * devices, void \
                *user_data)
-
-         available_devices++;
-
--        action_specific_timeout = get_action_timeout(device, query->action, 0);
-         dev = create_xml_node(list, F_STONITH_DEVICE);
-         crm_xml_add(dev, XML_ATTR_ID, device->id);
-         crm_xml_add(dev, "namespace", device->namespace);
-         crm_xml_add(dev, "agent", device->agent);
-         crm_xml_add_int(dev, F_STONITH_DEVICE_VERIFIED, device->verified);
--        if (is_action_required(query->action, device)) {
--            crm_xml_add_int(dev, F_STONITH_DEVICE_REQUIRED, 1);
--        }
--        if (action_specific_timeout) {
--            crm_xml_add_int(dev, F_STONITH_ACTION_TIMEOUT, \
                action_specific_timeout);
-+
-+        /* If the originating stonithd wants to reboot the node, and we have a
-+         * capable device that doesn't support "reboot", remap to "off" instead.
-+         */
-+        if (is_not_set(device->flags, st_device_supports_reboot)
-+            && safe_str_eq(query->action, "reboot")) {
-+            crm_trace("%s doesn't support reboot, using values for off instead",
-+                      device->id);
-+            action = "off";
-         }
-
--        delay_max = get_action_delay_max(device, query->action);
--        if (delay_max > 0) {
--            crm_xml_add_int(dev, F_STONITH_DELAY_MAX, delay_max / 1000);
-+        /* Add action-specific values if available */
-+        add_action_specific_attributes(dev, action, device);
-+        if (safe_str_eq(query->action, "reboot")) {
-+            /* A "reboot" *might* get remapped to "off" then "on", so after
-+             * sending the "reboot"-specific values in the main element, we add
-+             * sub-elements for "off" and "on" values.
-+             *
-+             * We short-circuited earlier if "reboot", "off" and "on" are all
-+             * disallowed for the local host. However if only one or two are
-+             * disallowed, we send back the results and mark which ones are
-+             * disallowed. If "reboot" is disallowed, this might cause problems
-+             * with older stonithd versions, which won't check for it. Older
-+             * versions will ignore "off" and "on", so they are not a problem.
-+             */
-+            add_disallowed(dev, action, device, query->target,
-+                           is_set(query->call_options, st_opt_allow_suicide));
-+            add_action_reply(dev, "off", device, query->target,
-+                             is_set(query->call_options, st_opt_allow_suicide));
-+            add_action_reply(dev, "on", device, query->target, FALSE);
-         }
-
-+        /* A query without a target wants device parameters */
-         if (query->target == NULL) {
-             xmlNode *attrs = create_xml_node(dev, XML_TAG_ATTRS);
-
-@@ -1481,7 +1632,7 @@ stonith_query_capable_device_cb(GList * devices, void \
                *user_data)
-     }
-
-     if (list != NULL) {
--        crm_trace("Attaching query list output");
-+        crm_log_xml_trace(list, "Add query results");
-         add_message_xml(query->reply, F_STONITH_CALLDATA, list);
-     }
-     stonith_send_reply(query->reply, query->call_options, query->remote_peer, \
                query->client_id);
-@@ -1766,6 +1917,14 @@ st_child_done(GPid pid, int rc, const char *output, gpointer \
                user_data)
-             continue;
-         }
-
-+        /* Duplicate merging will do the right thing for either type of remapped
-+         * reboot. If the executing stonithd remapped an unsupported reboot to
-+         * off, then cmd->action will be reboot and will be merged with any
-+         * other reboot requests. If the originating stonithd remapped a
-+         * topology reboot to off then on, we will get here once with
-+         * cmd->action "off" and once with "on", and they will be merged
-+         * separately with similar requests.
-+         */
-         crm_notice
-             ("Merging stonith action %s for node %s originating from client %s with \
                identical stonith request from client %s",
-              cmd_other->action, cmd_other->victim, cmd_other->client_name, \
                cmd->client_name);
-diff --git a/fencing/internal.h b/fencing/internal.h
-index 46bd3bf..5fb8f9c 100644
---- a/fencing/internal.h
-+++ b/fencing/internal.h
-@@ -51,6 +51,17 @@ typedef struct stonith_device_s {
-     gboolean api_registered;
- } stonith_device_t;
-
-+/* These values are used to index certain arrays by "phase". Usually an
-+ * operation has only one "phase", so phase is always zero. However, some
-+ * reboots are remapped to "off" then "on", in which case "reboot" will be
-+ * phase 0, "off" will be phase 1 and "on" will be phase 2.
-+ */
-+enum st_remap_phase {
-+    st_phase_requested = 0,
-+    st_phase_off = 1,
-+    st_phase_on = 2
-+};
-+
- typedef struct remote_fencing_op_s {
-     /* The unique id associated with this operation */
-     char *id;
-@@ -97,7 +108,7 @@ typedef struct remote_fencing_op_s {
-     long long call_options;
-
-     /*! The current state of the remote operation. This indicates
--     * what phase the op is in, query, exec, done, duplicate, failed. */
-+     * what stage the op is in, query, exec, done, duplicate, failed. */
-     enum op_state state;
-     /*! The node that owns the remote operation */
-     char *originator;
-@@ -114,10 +125,17 @@ typedef struct remote_fencing_op_s {
-
-     /*! The current topology level being executed */
-     guint level;
--
--    /*! List of required devices the topology must execute regardless of what
--     * topology level they exist at. */
--    GListPtr required_list;
-+    /*! The current operation phase being executed */
-+    enum st_remap_phase phase;
-+
-+    /* For phase 0 or 1 (requested action or a remapped "off"), required devices
-+     * will be executed regardless of what topology level is being executed
-+     * currently. For phase 2 (remapped "on"), required devices will not be
-+     * attempted, because the cluster will execute them automatically when the
-+     * node next joins the cluster.
-+     */
-+    /*! Lists of devices marked as required for each phase */
-+    GListPtr required_list[3];
-     /*! The device list of all the devices at the current executing topology level. \
                */
-     GListPtr devices_list;
-     /*! Current entry in the topology device list */
-@@ -129,6 +147,20 @@ typedef struct remote_fencing_op_s {
-
- } remote_fencing_op_t;
-
-+/*
-+ * Complex fencing requirements are specified via fencing topologies.
-+ * A topology consists of levels; each level is a list of fencing devices.
-+ * Topologies are stored in a hash table by node name. When a node needs to be
-+ * fenced, if it has an entry in the topology table, the levels are tried
-+ * sequentially, and the devices in each level are tried sequentially.
-+ * Fencing is considered successful as soon as any level succeeds;
-+ * a level is considered successful if all its devices succeed.
-+ * Essentially, all devices at a given level are "and-ed" and the
-+ * levels are "or-ed".
-+ *
-+ * This structure is used for the topology table entries.
-+ * Topology levels start from 1, so levels[0] is unused and always NULL.
-+ */
- typedef struct stonith_topology_s {
-     char *node;
-     GListPtr levels[ST_LEVEL_MAX];
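
As an aside, the and/or semantics described in the comment above map directly
onto the `stonith_admin -r` registration commands used by the regression tests
later in this commit; a minimal sketch with illustrative device names:

    # Sketch: two topology levels for node3. Levels are tried in order
    # (or-ed); every device within a level must succeed (and-ed).
    import shlex
    import subprocess

    def run(cmd):
        proc = subprocess.Popen(shlex.split(cmd))
        proc.wait()
        return proc.returncode

    run('stonith_admin -r node3 -i 1 -v psu-switch')  # level 1: one device
    run('stonith_admin -r node3 -i 2 -v psu-a')       # level 2 needs psu-a
    run('stonith_admin -r node3 -i 2 -v psu-b')       # ...and psu-b
    # Fencing node3 succeeds if level 1 succeeds or, failing that, if both
    # level-2 devices succeed.
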
-diff --git a/fencing/main.c b/fencing/main.c
-index a499175..46d7352 100644
---- a/fencing/main.c
-+++ b/fencing/main.c
-@@ -1234,7 +1234,7 @@ struct qb_ipcs_service_handlers ipc_callbacks = {
- static void
- st_peer_update_callback(enum crm_status_type type, crm_node_t * node, const void \
                *data)
- {
--    if (type == crm_status_uname) {
-+    if (type != crm_status_processes) {
-         /*
-          * This is a hack until we can send to a nodeid and/or we fix node name \
                lookups
-          * These messages are ignored in stonith_peer_callback()
-diff --git a/fencing/regression.py.in b/fencing/regression.py.in
-index fe6d418..b4e6f08 100644
---- a/fencing/regression.py.in
-+++ b/fencing/regression.py.in
-@@ -23,861 +23,937 @@ import shlex
- import time
-
- def output_from_command(command):
--	test = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, \
                stderr=subprocess.PIPE)
--	test.wait()
-+    test = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE, \
                stderr=subprocess.PIPE)
-+    test.wait()
-
--	return test.communicate()[0].split("\n")
-+    return test.communicate()[0].split("\n")
-
- class Test:
--	def __init__(self, name, description, verbose = 0, with_cpg = 0):
--		self.name = name
--		self.description = description
--		self.cmds = []
--		self.verbose = verbose
-+    def __init__(self, name, description, verbose = 0, with_cpg = 0):
-+        self.name = name
-+        self.description = description
-+        self.cmds = []
-+        self.verbose = verbose
-
--		self.result_txt = ""
--		self.cmd_tool_output = ""
--		self.result_exitcode = 0;
-+        self.result_txt = ""
-+        self.cmd_tool_output = ""
-+        self.result_exitcode = 0;
-
--		self.stonith_options = "-s"
--		self.enable_corosync = 0
-+        self.stonith_options = "-s"
-+        self.enable_corosync = 0
-
--		if with_cpg:
--			self.stonith_options = "-c"
--			self.enable_corosync = 1
-+        if with_cpg:
-+            self.stonith_options = "-c"
-+            self.enable_corosync = 1
-
--		self.stonith_process = None
--		self.stonith_output = ""
--		self.stonith_patterns = []
--		self.negative_stonith_patterns = []
-+        self.stonith_process = None
-+        self.stonith_output = ""
-+        self.stonith_patterns = []
-+        self.negative_stonith_patterns = []
-
--		self.executed = 0
-+        self.executed = 0
-
--		rsc_classes = output_from_command("crm_resource --list-standards")
-+        rsc_classes = output_from_command("crm_resource --list-standards")
-
--	def __new_cmd(self, cmd, args, exitcode, stdout_match = "", no_wait = 0, \
                stdout_negative_match = "", kill=None):
--		self.cmds.append(
--			{
--				"cmd" : cmd,
--				"kill" : kill,
--				"args" : args,
--				"expected_exitcode" : exitcode,
--				"stdout_match" : stdout_match,
--				"stdout_negative_match" : stdout_negative_match,
--				"no_wait" : no_wait,
--			}
--		)
-+    def __new_cmd(self, cmd, args, exitcode, stdout_match = "", no_wait = 0, \
                stdout_negative_match = "", kill=None):
-+        self.cmds.append(
-+            {
-+                "cmd" : cmd,
-+                "kill" : kill,
-+                "args" : args,
-+                "expected_exitcode" : exitcode,
-+                "stdout_match" : stdout_match,
-+                "stdout_negative_match" : stdout_negative_match,
-+                "no_wait" : no_wait,
-+            }
-+        )
-
--	def stop_pacemaker(self):
--		cmd = shlex.split("killall -9 -q pacemakerd")
--		test = subprocess.Popen(cmd, stdout=subprocess.PIPE)
--		test.wait()
-+    def stop_pacemaker(self):
-+        cmd = shlex.split("killall -9 -q pacemakerd")
-+        test = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-+        test.wait()
-
--	def start_environment(self):
--		### make sure we are in full control here ###
--		self.stop_pacemaker()
-+    def start_environment(self):
-+        ### make sure we are in full control here ###
-+        self.stop_pacemaker()
-
--		cmd = shlex.split("killall -9 -q stonithd")
--		test = subprocess.Popen(cmd, stdout=subprocess.PIPE)
--		test.wait()
-+        cmd = shlex.split("killall -9 -q stonithd")
-+        test = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-+        test.wait()
-
--		if self.verbose:
--			self.stonith_options = self.stonith_options + " -V"
--			print "Starting stonithd with %s" % self.stonith_options
-+        if self.verbose:
-+            self.stonith_options = self.stonith_options + " -V"
-+            print "Starting stonithd with %s" % self.stonith_options
-
--		if os.path.exists("/tmp/stonith-regression.log"):
--			os.remove('/tmp/stonith-regression.log')
-+        if os.path.exists("/tmp/stonith-regression.log"):
-+            os.remove('/tmp/stonith-regression.log')
-
--		self.stonith_process = subprocess.Popen(
--			shlex.split("@CRM_DAEMON_DIR@/stonithd %s -l /tmp/stonith-regression.log" % \
                self.stonith_options))
-+        self.stonith_process = subprocess.Popen(
-+            shlex.split("@CRM_DAEMON_DIR@/stonithd %s -l \
                /tmp/stonith-regression.log" % self.stonith_options))
-
--		time.sleep(1)
--
--	def clean_environment(self):
--		if self.stonith_process:
--			self.stonith_process.terminate()
--			self.stonith_process.wait()
--
--		self.stonith_output = ""
--		self.stonith_process = None
--
--		f = open('/tmp/stonith-regression.log', 'r')
--		for line in f.readlines():
--			self.stonith_output = self.stonith_output + line
--
--		if self.verbose:
--			print "Daemon Output Start"
--			print self.stonith_output
--			print "Daemon Output End"
--		os.remove('/tmp/stonith-regression.log')
--
--	def add_stonith_log_pattern(self, pattern):
--		self.stonith_patterns.append(pattern)
--
--	def add_stonith_negative_log_pattern(self, pattern):
--		self.negative_stonith_patterns.append(pattern)
--
--	def add_cmd(self, cmd, args):
--		self.__new_cmd(cmd, args, 0, "")
--
--	def add_cmd_no_wait(self, cmd, args):
--		self.__new_cmd(cmd, args, 0, "", 1)
--
--	def add_cmd_check_stdout(self, cmd, args, match, no_match = ""):
--		self.__new_cmd(cmd, args, 0, match, 0, no_match)
--
--	def add_expected_fail_cmd(self, cmd, args, exitcode = 255):
--		self.__new_cmd(cmd, args, exitcode, "")
--
--	def get_exitcode(self):
--		return self.result_exitcode
--
--	def print_result(self, filler):
--		print "%s%s" % (filler, self.result_txt)
--
--	def run_cmd(self, args):
--		cmd = shlex.split(args['args'])
--		cmd.insert(0, args['cmd'])
--
--		if self.verbose:
--			print "\n\nRunning: "+" ".join(cmd)
--		test = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
--
--		if args['kill']:
--			if self.verbose:
--				print "Also running: "+args['kill']
--			subprocess.Popen(shlex.split(args['kill']))
--
--		if args['no_wait'] == 0:
--			test.wait()
--		else:
--			return 0
--
--		output_res = test.communicate()
--		output = output_res[0] + output_res[1]
--
--		if self.verbose:
--			print output
--
--		if args['stdout_match'] != "" and output.count(args['stdout_match']) == 0:
--			test.returncode = -2
--			print "STDOUT string '%s' was not found in cmd output: %s" % \
                (args['stdout_match'], output)
--
--		if args['stdout_negative_match'] != "" and \
                output.count(args['stdout_negative_match']) != 0:
--			test.returncode = -2
--			print "STDOUT string '%s' was found in cmd output: %s" % \
                (args['stdout_negative_match'], output)
--
--		return test.returncode;
--
--
--	def count_negative_matches(self, outline):
--		count = 0
--		for line in self.negative_stonith_patterns:
--			if outline.count(line):
--				count = 1
--				if self.verbose:
--					print "This pattern should not have matched = '%s" % (line)
--		return count
--
--	def match_stonith_patterns(self):
--		negative_matches = 0
--		cur = 0
--		pats = self.stonith_patterns
--		total_patterns = len(self.stonith_patterns)
--
--		if len(self.stonith_patterns) == 0:
--			return
--
--		for line in self.stonith_output.split("\n"):
--			negative_matches = negative_matches + self.count_negative_matches(line)
--			if len(pats) == 0:
--				continue
--			cur = -1
--			for p in pats:
--				cur = cur + 1
--				if line.count(pats[cur]):
--					del pats[cur]
--					break
--
--		if len(pats) > 0 or negative_matches:
--			if self.verbose:
--				for p in pats:
--					print "Pattern Not Matched = '%s'" % p
--
--			self.result_txt = "FAILURE - '%s' failed. %d patterns out of %d not matched. %d \
                negative matches." % (self.name, len(pats), total_patterns, \
                negative_matches)
--			self.result_exitcode = -1
--
--	def run(self):
--		res = 0
--		i = 1
--		self.start_environment()
--
--		if self.verbose:
--			print "\n--- START TEST - %s" % self.name
--
--		self.result_txt = "SUCCESS - '%s'" % (self.name)
--		self.result_exitcode = 0
--		for cmd in self.cmds:
--			res = self.run_cmd(cmd)
--			if res != cmd['expected_exitcode']:
--				print "Step %d FAILED - command returned %d, expected %d" % (i, res, \
                cmd['expected_exitcode'])
--				self.result_txt = "FAILURE - '%s' failed at step %d. Command: %s %s" % \
                (self.name, i, cmd['cmd'], cmd['args'])
--				self.result_exitcode = -1
--				break
--			else:
--				if self.verbose:
--					print "Step %d SUCCESS" % (i)
--			i = i + 1
--		self.clean_environment()
--
--		if self.result_exitcode == 0:
--			self.match_stonith_patterns()
--
--		print self.result_txt
--		if self.verbose:
--			print "--- END TEST - %s\n" % self.name
--
--		self.executed = 1
--		return res
-+        time.sleep(1)
-+
-+    def clean_environment(self):
-+        if self.stonith_process:
-+            self.stonith_process.terminate()
-+            self.stonith_process.wait()
-+
-+        self.stonith_output = ""
-+        self.stonith_process = None
-+
-+        f = open('/tmp/stonith-regression.log', 'r')
-+        for line in f.readlines():
-+            self.stonith_output = self.stonith_output + line
-+
-+        if self.verbose:
-+            print "Daemon Output Start"
-+            print self.stonith_output
-+            print "Daemon Output End"
-+        os.remove('/tmp/stonith-regression.log')
-+
-+    def add_stonith_log_pattern(self, pattern):
-+        self.stonith_patterns.append(pattern)
-+
-+    def add_stonith_negative_log_pattern(self, pattern):
-+        self.negative_stonith_patterns.append(pattern)
-+
-+    def add_cmd(self, cmd, args):
-+        self.__new_cmd(cmd, args, 0, "")
-+
-+    def add_cmd_no_wait(self, cmd, args):
-+        self.__new_cmd(cmd, args, 0, "", 1)
-+
-+    def add_cmd_check_stdout(self, cmd, args, match, no_match = ""):
-+        self.__new_cmd(cmd, args, 0, match, 0, no_match)
-+
-+    def add_expected_fail_cmd(self, cmd, args, exitcode = 255):
-+        self.__new_cmd(cmd, args, exitcode, "")
-+
-+    def get_exitcode(self):
-+        return self.result_exitcode
-+
-+    def print_result(self, filler):
-+        print "%s%s" % (filler, self.result_txt)
-+
-+    def run_cmd(self, args):
-+        cmd = shlex.split(args['args'])
-+        cmd.insert(0, args['cmd'])
-+
-+        if self.verbose:
-+            print "\n\nRunning: "+" ".join(cmd)
-+        test = subprocess.Popen(cmd, stdout=subprocess.PIPE, \
                stderr=subprocess.PIPE)
-+
-+        if args['kill']:
-+            if self.verbose:
-+                print "Also running: "+args['kill']
-+            subprocess.Popen(shlex.split(args['kill']))
-+
-+        if args['no_wait'] == 0:
-+            test.wait()
-+        else:
-+            return 0
-+
-+        output_res = test.communicate()
-+        output = output_res[0] + output_res[1]
-+
-+        if self.verbose:
-+            print output
-+
-+        if args['stdout_match'] != "" and output.count(args['stdout_match']) == 0:
-+            test.returncode = -2
-+            print "STDOUT string '%s' was not found in cmd output: %s" % \
                (args['stdout_match'], output)
-+
-+        if args['stdout_negative_match'] != "" and \
                output.count(args['stdout_negative_match']) != 0:
-+            test.returncode = -2
-+            print "STDOUT string '%s' was found in cmd output: %s" % \
                (args['stdout_negative_match'], output)
-+
-+        return test.returncode;
-+
-+
-+    def count_negative_matches(self, outline):
-+        count = 0
-+        for line in self.negative_stonith_patterns:
-+            if outline.count(line):
-+                count = 1
-+                if self.verbose:
-+                    print "This pattern should not have matched = '%s" % (line)
-+        return count
-+
-+    def match_stonith_patterns(self):
-+        negative_matches = 0
-+        cur = 0
-+        pats = self.stonith_patterns
-+        total_patterns = len(self.stonith_patterns)
-+
-+        if len(self.stonith_patterns) == 0:
-+            return
-+
-+        for line in self.stonith_output.split("\n"):
-+            negative_matches = negative_matches + self.count_negative_matches(line)
-+            if len(pats) == 0:
-+                continue
-+            cur = -1
-+            for p in pats:
-+                cur = cur + 1
-+                if line.count(pats[cur]):
-+                    del pats[cur]
-+                    break
-+
-+        if len(pats) > 0 or negative_matches:
-+            if self.verbose:
-+                for p in pats:
-+                    print "Pattern Not Matched = '%s'" % p
-+
-+            self.result_txt = "FAILURE - '%s' failed. %d patterns out of %d not \
matched. %d negative matches." % (self.name, len(pats), total_patterns, \
                negative_matches)
-+            self.result_exitcode = -1
-+
-+    def run(self):
-+        res = 0
-+        i = 1
-+        self.start_environment()
-+
-+        if self.verbose:
-+            print "\n--- START TEST - %s" % self.name
-+
-+        self.result_txt = "SUCCESS - '%s'" % (self.name)
-+        self.result_exitcode = 0
-+        for cmd in self.cmds:
-+            res = self.run_cmd(cmd)
-+            if res != cmd['expected_exitcode']:
-+                print "Step %d FAILED - command returned %d, expected %d" % (i, \
                res, cmd['expected_exitcode'])
-+                self.result_txt = "FAILURE - '%s' failed at step %d. Command: %s \
                %s" % (self.name, i, cmd['cmd'], cmd['args'])
-+                self.result_exitcode = -1
-+                break
-+            else:
-+                if self.verbose:
-+                    print "Step %d SUCCESS" % (i)
-+            i = i + 1
-+        self.clean_environment()
-+
-+        if self.result_exitcode == 0:
-+            self.match_stonith_patterns()
-+
-+        print self.result_txt
-+        if self.verbose:
-+            print "--- END TEST - %s\n" % self.name
-+
-+        self.executed = 1
-+        return res
-
- class Tests:
--	def __init__(self, verbose = 0):
--		self.tests = []
--		self.verbose = verbose
--		self.autogen_corosync_cfg = 0
--		if not os.path.exists("/etc/corosync/corosync.conf"):
--			self.autogen_corosync_cfg = 1
--
--	def new_test(self, name, description, with_cpg = 0):
--		test = Test(name, description, self.verbose, with_cpg)
--		self.tests.append(test)
--		return test
--
--	def print_list(self):
--		print "\n==== %d TESTS FOUND ====" % (len(self.tests))
--		print "%35s - %s" % ("TEST NAME", "TEST DESCRIPTION")
--		print "%35s - %s" % ("--------------------", "--------------------")
--		for test in self.tests:
--			print "%35s - %s" % (test.name, test.description)
--		print "==== END OF LIST ====\n"
--
--
--	def start_corosync(self):
--		if self.verbose:
--			print "Starting corosync"
--
--		test = subprocess.Popen("corosync", stdout=subprocess.PIPE)
--		test.wait()
--		time.sleep(10)
--
--	def stop_corosync(self):
--		cmd = shlex.split("killall -9 -q corosync")
--		test = subprocess.Popen(cmd, stdout=subprocess.PIPE)
--		test.wait()
--
--	def run_single(self, name):
--		for test in self.tests:
--			if test.name == name:
--				test.run()
--				break;
--
--	def run_tests_matching(self, pattern):
--		for test in self.tests:
--			if test.name.count(pattern) != 0:
--				test.run()
--
--	def run_cpg_only(self):
--		for test in self.tests:
--			if test.enable_corosync:
--				test.run()
--
--	def run_no_cpg(self):
--		for test in self.tests:
--			if not test.enable_corosync:
--				test.run()
--
--	def run_tests(self):
--		for test in self.tests:
--			test.run()
--
--	def exit(self):
--		for test in self.tests:
--			if test.executed == 0:
--				continue
--
--			if test.get_exitcode() != 0:
--				sys.exit(-1)
--
--		sys.exit(0)
--
--	def print_results(self):
--		failures = 0;
--		success = 0;
--		print "\n\n======= FINAL RESULTS =========="
--		print "\n--- FAILURE RESULTS:"
--		for test in self.tests:
--			if test.executed == 0:
--				continue
--
--			if test.get_exitcode() != 0:
--				failures = failures + 1
--				test.print_result("    ")
--			else:
--				success = success + 1
--
--		if failures == 0:
--			print "    None"
--
--		print "\n--- TOTALS\n    Pass:%d\n    Fail:%d\n" % (success, failures)
--	def build_api_sanity_tests(self):
--		verbose_arg = ""
--		if self.verbose:
--			verbose_arg = "-V"
--
--		test = self.new_test("standalone_low_level_api_test", "Sanity test client api in \
                standalone mode.")
--		test.add_cmd("@CRM_DAEMON_DIR@/stonith-test", "-t %s" % (verbose_arg))
--
--		test = self.new_test("cpg_low_level_api_test", "Sanity test client api using \
                mainloop and cpg.", 1)
--		test.add_cmd("@CRM_DAEMON_DIR@/stonith-test", "-m %s" % (verbose_arg))
--
--	def build_custom_timeout_tests(self):
--		# custom timeout without topology
--		test = self.new_test("cpg_custom_timeout_1",
--				"Verify per device timeouts work as expected without using topology.", 1)
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--		test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=1\"")
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=4\"")
--		test.add_cmd("stonith_admin", "-F node3 -t 2")
--		# timeout is 2+1+4 = 7
--		test.add_stonith_log_pattern("remote op timeout set to 7")
--
--		# custom timeout _WITH_ topology
--		test = self.new_test("cpg_custom_timeout_2",
--				"Verify per device timeouts work as expected _WITH_ topology.", 1)
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--		test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=1\"")
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=4000\"")
--		test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
--		test.add_cmd("stonith_admin", "-r node3 -i 2 -v true1")
--		test.add_cmd("stonith_admin", "-r node3 -i 3 -v false2")
--		test.add_cmd("stonith_admin", "-F node3 -t 2")
--		# timeout is 2+1+4000 = 4003
--		test.add_stonith_log_pattern("remote op timeout set to 4003")
--
--	def build_fence_merge_tests(self):
--
--		### Simple test that overlapping fencing operations get merged
--		test = self.new_test("cpg_custom_merge_single",
--				"Verify overlapping identical fencing operations are merged, no fencing levels \
                used.", 1)
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--		test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" ")
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd("stonith_admin", "-F node3 -t 10")
--		### one merger will happen
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		### the pattern below signifies that both the original and duplicate operation \
                completed
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--
--		### Test that multiple mergers occur
--		test = self.new_test("cpg_custom_merge_multiple",
--				"Verify multiple overlapping identical fencing operations are merged", 1)
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--		test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" ")
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd("stonith_admin", "-F node3 -t 10")
--		### 4 mergers should occur
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		### the patterns below signify that the original and each duplicate operation \
                completed
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--
--		### Test that multiple mergers occur with topologies used
--		test = self.new_test("cpg_custom_merge_with_topology",
--				"Verify multiple overlapping identical fencing operations are merged with \
                fencing levels.", 1)
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--		test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" ")
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--		test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
--		test.add_cmd("stonith_admin", "-r node3 -i 1 -v false2")
--		test.add_cmd("stonith_admin", "-r node3 -i 2 -v true1")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
--		test.add_cmd("stonith_admin", "-F node3 -t 10")
--		### 4 mergers should occur
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--		### the patterns below signify that the original and each duplicate operation \
                completed
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--		test.add_stonith_log_pattern("Operation off of node3 by")
--
--
--		test = self.new_test("cpg_custom_no_merge",
--				"Verify differing fencing operations are not merged", 1)
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3 node2\"")
--		test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3 node2\" ")
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3 node2\"")
--		test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
--		test.add_cmd("stonith_admin", "-r node3 -i 1 -v false2")
--		test.add_cmd("stonith_admin", "-r node3 -i 2 -v true1")
--		test.add_cmd_no_wait("stonith_admin", "-F node2 -t 10")
--		test.add_cmd("stonith_admin", "-F node3 -t 10")
--		test.add_stonith_negative_log_pattern("Merging stonith action off for node node3 \
                originating from client")
--
--	def build_standalone_tests(self):
--		test_types = [
--			{
--				"prefix" : "standalone" ,
--				"use_cpg" : 0,
--			},
--			{
--				"prefix" : "cpg" ,
--				"use_cpg" : 1,
--			},
--		]
--
--		# test what happens when all devices timeout
--		for test_type in test_types:
--			test = self.new_test("%s_fence_multi_device_failure" % test_type["prefix"],
--					"Verify that when all devices time out, a fencing failure is returned.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false2  -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false3 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			if test_type["use_cpg"] == 1:
--				test.add_expected_fail_cmd("stonith_admin", "-F node3 -t 2", 194)
--				test.add_stonith_log_pattern("remote op timeout set to 6")
--			else:
--				test.add_expected_fail_cmd("stonith_admin", "-F node3 -t 2", 55)
--
--			test.add_stonith_log_pattern("for host 'node3' with device 'false1' returned: ")
--			test.add_stonith_log_pattern("for host 'node3' with device 'false2' returned: ")
--			test.add_stonith_log_pattern("for host 'node3' with device 'false3' returned: ")
--
--		# test what happens when multiple devices can fence a node, but the first device \
                fails.
--		for test_type in test_types:
--			test = self.new_test("%s_fence_device_failure_rollover" % test_type["prefix"],
--					"Verify that when one fence device fails for a node, the others are tried.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-F node3 -t 2")
--
--			if test_type["use_cpg"] == 1:
--				test.add_stonith_log_pattern("remote op timeout set to 6")
--
--		# simple topology test for one device
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--
--			test = self.new_test("%s_topology_simple" % test_type["prefix"],
--					"Verify all fencing devices at a level are used.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v true")
--			test.add_cmd("stonith_admin", "-F node3 -t 2")
--
--			test.add_stonith_log_pattern("remote op timeout set to 2")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true' returned: 0")
--
--
--		# add topology, delete topology, verify fencing still works
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--
--			test = self.new_test("%s_topology_add_remove" % test_type["prefix"],
--					"Verify fencing occurs after all topology levels are removed.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v true")
--			test.add_cmd("stonith_admin", "-d node3 -i 1")
--			test.add_cmd("stonith_admin", "-F node3 -t 2")
--
--			test.add_stonith_log_pattern("remote op timeout set to 2")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true' returned: 0")
--
--		# test what happens when the first fencing level has multiple devices.
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--
--			test = self.new_test("%s_topology_device_fails" % test_type["prefix"],
--					"Verify if one device in a level fails, the other is tried.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R false  -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v false")
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v true")
--			test.add_cmd("stonith_admin", "-F node3 -t 20")
--
--			test.add_stonith_log_pattern("remote op timeout set to 40")
--			test.add_stonith_log_pattern("for host 'node3' with device 'false' returned: \
                -201")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true' returned: 0")
--
--		# test what happens when the first fencing level fails.
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--
--			test = self.new_test("%s_topology_multi_level_fails" % test_type["prefix"],
--					"Verify if one level fails, the next level is tried.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true4  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v true1")
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v true2")
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v false2")
--			test.add_cmd("stonith_admin", "-r node3 -i 3 -v true3")
--			test.add_cmd("stonith_admin", "-r node3 -i 3 -v true4")
--
--			test.add_cmd("stonith_admin", "-F node3 -t 2")
--
--			test.add_stonith_log_pattern("remote op timeout set to 12")
--			test.add_stonith_log_pattern("for host 'node3' with device 'false1' returned: \
                -201")
--			test.add_stonith_log_pattern("for host 'node3' with device 'false2' returned: \
                -201")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true3' returned: 0")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true4' returned: 0")
--
--
--		# test what happens when the first fencing level has devices that no one has \
                registered
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--
--			test = self.new_test("%s_topology_missing_devices" % test_type["prefix"],
--					"Verify topology can continue with missing devices.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true4  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v true1")
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v true2")
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v false2")
--			test.add_cmd("stonith_admin", "-r node3 -i 3 -v true3")
--			test.add_cmd("stonith_admin", "-r node3 -i 3 -v true4")
--
--			test.add_cmd("stonith_admin", "-F node3 -t 2")
--
--		# Test what happens if multiple fencing levels are defined, and then one of \
                them is removed.
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--
--			test = self.new_test("%s_topology_level_removal" % test_type["prefix"],
--					"Verify level removal works.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true4  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
--			test.add_cmd("stonith_admin", "-r node3 -i 1 -v true1")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v true2")
--			test.add_cmd("stonith_admin", "-r node3 -i 2 -v false2")
--
--			test.add_cmd("stonith_admin", "-r node3 -i 3 -v true3")
--			test.add_cmd("stonith_admin", "-r node3 -i 3 -v true4")
--
--			# Now remove level 2, verify none of the devices in level two are hit.
--			test.add_cmd("stonith_admin", "-d node3 -i 2")
--
--			test.add_cmd("stonith_admin", "-F node3 -t 20")
--
--			test.add_stonith_log_pattern("remote op timeout set to 8")
--			test.add_stonith_log_pattern("for host 'node3' with device 'false1' returned: \
                -201")
--			test.add_stonith_negative_log_pattern("for host 'node3' with device 'false2' \
                returned: ")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true3' returned: 0")
--			test.add_stonith_log_pattern("for host 'node3' with device 'true4' returned: 0")
--
--		# test that stonith builds the correct list of devices that can fence a node.
--		for test_type in test_types:
--			test = self.new_test("%s_list_devices" % test_type["prefix"],
--					"Verify list of devices that can fence a node is correct", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\"")
--			test.add_cmd("stonith_admin", "-R true2 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-R true3 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--
--			test.add_cmd_check_stdout("stonith_admin", "-l node1 -V", "true2", "true1")
--			test.add_cmd_check_stdout("stonith_admin", "-l node1 -V", "true3", "true1")
--
--		# simple test of device monitor
--		for test_type in test_types:
--			test = self.new_test("%s_monitor" % test_type["prefix"],
--					"Verify device is reachable", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\"")
--			test.add_cmd("stonith_admin", "-R false1  -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=node3\"")
--
--			test.add_cmd("stonith_admin", "-Q true1")
--			test.add_cmd("stonith_admin", "-Q false1")
--			test.add_expected_fail_cmd("stonith_admin", "-Q true2", 237)
--
--		# Verify monitor occurs for duration of timeout period on failure
--		for test_type in test_types:
--			test = self.new_test("%s_monitor_timeout" % test_type["prefix"],
--					"Verify monitor uses duration of timeout period given.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_monitor_fail -o \
                \"pcmk_host_list=node3\"")
--			test.add_expected_fail_cmd("stonith_admin", "-Q true1 -t 5", 195)
--			test.add_stonith_log_pattern("Attempt 2 to execute")
--
--		# Verify monitor occurs for duration of timeout period on failure, but stops at \
                max retries
--		for test_type in test_types:
--			test = self.new_test("%s_monitor_timeout_max_retries" % test_type["prefix"],
--					"Verify monitor retries until max retry value or timeout is hit.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_monitor_fail -o \
                \"pcmk_host_list=node3\"")
--			test.add_expected_fail_cmd("stonith_admin", "-Q true1 -t 15", 195)
--			test.add_stonith_log_pattern("Attempted to execute agent \
                fence_dummy_monitor_fail (list) the maximum number of times")
--
--		# simple register test
--		for test_type in test_types:
--			test = self.new_test("%s_register" % test_type["prefix"],
--					"Verify devices can be registered and un-registered", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\"")
--
--			test.add_cmd("stonith_admin", "-Q true1")
--
--			test.add_cmd("stonith_admin", "-D true1")
--
--			test.add_expected_fail_cmd("stonith_admin", "-Q true1", 237)
--
--
--		# simple reboot test
--		for test_type in test_types:
--			test = self.new_test("%s_reboot" % test_type["prefix"],
--					"Verify devices can be rebooted", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\"")
--
--			test.add_cmd("stonith_admin", "-B node3 -t 2")
--
--			test.add_cmd("stonith_admin", "-D true1")
--
--			test.add_expected_fail_cmd("stonith_admin", "-Q true1", 237)
--
--		# test fencing history.
--		for test_type in test_types:
--			if test_type["use_cpg"] == 0:
--				continue
--			test = self.new_test("%s_fence_history" % test_type["prefix"],
--					"Verify last fencing operation is returned.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\"")
--
--			test.add_cmd("stonith_admin", "-F node3 -t 2 -V")
--
--			test.add_cmd_check_stdout("stonith_admin", "-H node3", "was able to turn off \
                node node3", "")
--
--		# simple test of dynamic list query
--		for test_type in test_types:
--			test = self.new_test("%s_dynamic_list_query" % test_type["prefix"],
--					"Verify dynamic list of fencing devices can be retrieved.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_list")
--			test.add_cmd("stonith_admin", "-R true2  -a fence_dummy_list")
--			test.add_cmd("stonith_admin", "-R true3  -a fence_dummy_list")
--
--			test.add_cmd_check_stdout("stonith_admin", "-l fake_port_1", "3 devices found")
--
--
--		# fence using dynamic list query
--		for test_type in test_types:
--			test = self.new_test("%s_fence_dynamic_list_query" % test_type["prefix"],
--					"Verify dynamic list of fencing devices can be retrieved.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_list")
--			test.add_cmd("stonith_admin", "-R true2  -a fence_dummy_list")
--			test.add_cmd("stonith_admin", "-R true3  -a fence_dummy_list")
--
--			test.add_cmd("stonith_admin", "-F fake_port_1 -t 5 -V");
--
--		# simple test of query using status action
--		for test_type in test_types:
--			test = self.new_test("%s_status_query" % test_type["prefix"],
--					"Verify dynamic list of fencing devices can be retrieved.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_check=status\"")
--			test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_check=status\"")
--			test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_check=status\"")
--
--			test.add_cmd_check_stdout("stonith_admin", "-l fake_port_1", "3 devices found")
--
--		# test what happens when no reboot action is advertised
--		for test_type in test_types:
--			test = self.new_test("%s_no_reboot_support" % test_type["prefix"],
--					"Verify reboot action defaults to off when no reboot action is advertised by \
                agent.", test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_no_reboot -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-B node1 -t 5 -V");
--			test.add_stonith_log_pattern("does not advertise support for 'reboot', \
                performing 'off'")
--			test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
--
--		# make sure reboot is used when reboot action is advertised
--		for test_type in test_types:
--			test = self.new_test("%s_with_reboot_support" % test_type["prefix"],
--					"Verify reboot action can be used when metadata advertises it.", \
                test_type["use_cpg"])
--			test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
--			test.add_cmd("stonith_admin", "-B node1 -t 5 -V");
--			test.add_stonith_negative_log_pattern("does not advertise support for 'reboot', \
                performing 'off'")
--			test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
--
--	def build_nodeid_tests(self):
--		our_uname = output_from_command("uname -n")
--		if our_uname:
--			our_uname = our_uname[0]
--
--		### verify nodeid is supplied when nodeid is in the metadata parameters
--		test = self.new_test("cpg_supply_nodeid",
--				"Verify nodeid is given when fence agent has nodeid as parameter", 1)
--
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-F %s -t 3" % (our_uname))
--		test.add_stonith_log_pattern("For stonith action (off) for victim %s, adding \
                nodeid" % (our_uname))
--
--		### verify nodeid is _NOT_ supplied when nodeid is not in the metadata parameters
--		test = self.new_test("cpg_do_not_supply_nodeid",
--				"Verify nodeid is _NOT_ given when fence agent does not have nodeid as \
                parameter", 1)
--
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-F %s -t 3" % (our_uname))
--		test.add_stonith_negative_log_pattern("For stonith action (off) for victim %s, \
                adding nodeid" % (our_uname))
--
--		### verify nodeid use doesn't explode standalone mode
--		test = self.new_test("standalone_do_not_supply_nodeid",
--				"Verify nodeid in metadata parameter list doesn't kill standalone mode", 0)
--
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-F %s -t 3" % (our_uname))
--		test.add_stonith_negative_log_pattern("For stonith action (off) for victim %s, \
                adding nodeid" % (our_uname))
--
--
--	def build_unfence_tests(self):
--		our_uname = output_from_command("uname -n")
--		if our_uname:
--			our_uname = our_uname[0]
--
--		### verify unfencing using automatic unfencing
--		test = self.new_test("cpg_unfence_required_1",
--				"Verify unfencing is required on all devices when automatic=true in the \
                agent's metadata", 1)
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
--		# both devices should be executed
--		test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
--		test.add_stonith_log_pattern("with device 'true2' returned: 0 (OK)");
--
--
--		### verify unfencing using automatic unfencing fails if any of the required \
                agents fail
--		test = self.new_test("cpg_unfence_required_2",
--				"Verify unfencing is required on all devices when automatic=true in the \
                agent's metadata", 1)
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"modeúil\" -o \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_expected_fail_cmd("stonith_admin", "-U %s -t 6" % (our_uname), 143)
--
--		### verify unfencing using automatic devices with topology
--		test = self.new_test("cpg_unfence_required_3",
--				"Verify unfencing is required on all devices even when required devices are at \
                different topology levels", 1)
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 1 -v true1" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 2 -v true2" % (our_uname))
--		test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
--		test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
--		test.add_stonith_log_pattern("with device 'true2' returned: 0 (OK)");
--
--
--		### verify unfencing using automatic devices with topology
--		test = self.new_test("cpg_unfence_required_4",
--				"Verify all required devices are executed even when topology levels fail.", 1)
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true3 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true4 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R false3 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R false4 -a fence_dummy -o \"mode=fail\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 1 -v true1" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 1 -v false1" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 2 -v false2" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 2 -v true2" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 2 -v false3" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 2 -v true3" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 3 -v false4" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 4 -v true4" % (our_uname))
--		test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
--		test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
--		test.add_stonith_log_pattern("with device 'true2' returned: 0 (OK)");
--		test.add_stonith_log_pattern("with device 'true3' returned: 0 (OK)");
--		test.add_stonith_log_pattern("with device 'true4' returned: 0 (OK)");
--
--		### verify unfencing using on_target device
--		test = self.new_test("cpg_unfence_on_target_1",
--				"Verify unfencing with on_target = true", 1)
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
--		test.add_stonith_log_pattern("(on) to be executed on the target node")
--
--
--		### verify failure of unfencing using on_target device
--		test = self.new_test("cpg_unfence_on_target_2",
--				"Verify unfencing failure with on_target = true", 1)
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node_fake_1234\"" % (our_uname))
--		test.add_expected_fail_cmd("stonith_admin", "-U node_fake_1234 -t 3", 237)
--		test.add_stonith_log_pattern("(on) to be executed on the target node")
--
--
--		### verify unfencing using on_target device with topology
--		test = self.new_test("cpg_unfence_on_target_3",
--				"Verify unfencing with on_target = true using topology", 1)
--
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true2 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
--
--		test.add_cmd("stonith_admin", "-r %s -i 1 -v true1" % (our_uname))
--		test.add_cmd("stonith_admin", "-r %s -i 2 -v true2" % (our_uname))
--
--		test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
--		test.add_stonith_log_pattern("(on) to be executed on the target node")
--
--		### verify unfencing using on_target device with topology fails when victim node \
                doesn't exist
--		test = self.new_test("cpg_unfence_on_target_4",
--				"Verify unfencing failure with on_target = true using topology", 1)
--
--		test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node_fake\"" % (our_uname))
--		test.add_cmd("stonith_admin", "-R true2 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node_fake\"" % (our_uname))
--
--		test.add_cmd("stonith_admin", "-r node_fake -i 1 -v true1")
--		test.add_cmd("stonith_admin", "-r node_fake -i 2 -v true2")
--
--		test.add_expected_fail_cmd("stonith_admin", "-U node_fake -t 3", 237)
--		test.add_stonith_log_pattern("(on) to be executed on the target node")
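--		# on_target devices must execute on the victim itself, so unfencing a
--		# node that does not exist has nowhere to run the (on) action and is
--		# expected to fail.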
--
--
--	def setup_environment(self, use_corosync):
--		if self.autogen_corosync_cfg and use_corosync:
--			corosync_conf = ("""
-+    def __init__(self, verbose = 0):
-+        self.tests = []
-+        self.verbose = verbose
-+        self.autogen_corosync_cfg = 0
-+        if not os.path.exists("/etc/corosync/corosync.conf"):
-+            self.autogen_corosync_cfg = 1
-+
-+    def new_test(self, name, description, with_cpg = 0):
-+        test = Test(name, description, self.verbose, with_cpg)
-+        self.tests.append(test)
-+        return test
-+
-+    def print_list(self):
-+        print "\n==== %d TESTS FOUND ====" % (len(self.tests))
-+        print "%35s - %s" % ("TEST NAME", "TEST DESCRIPTION")
-+        print "%35s - %s" % ("--------------------", "--------------------")
-+        for test in self.tests:
-+            print "%35s - %s" % (test.name, test.description)
-+        print "==== END OF LIST ====\n"
-+
-+
-+    def start_corosync(self):
-+        if self.verbose:
-+            print "Starting corosync"
-+
-+        test = subprocess.Popen("corosync", stdout=subprocess.PIPE)
-+        test.wait()
-+        time.sleep(10)
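-+        # The fixed 10s sleep just gives corosync time to form a membership.
-+        # A hypothetical, more robust variant could poll for readiness
-+        # instead of sleeping blindly, e.g.:
-+        #   for _ in range(20):
-+        #       if subprocess.call(shlex.split("corosync-cfgtool -s"),
-+        #                          stdout=subprocess.PIPE) == 0:
-+        #           break
-+        #       time.sleep(1)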
-+
-+    def stop_corosync(self):
-+        cmd = shlex.split("killall -9 -q corosync")
-+        test = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-+        test.wait()
-+
-+    def run_single(self, name):
-+        for test in self.tests:
-+            if test.name == name:
-+                test.run()
-+                break;
-+
-+    def run_tests_matching(self, pattern):
-+        for test in self.tests:
-+            if test.name.count(pattern) != 0:
-+                test.run()
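-+        # Note: test.name.count(pattern) is a substring test; the more
-+        # idiomatic spelling of the same check is "pattern in test.name".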
-+
-+    def run_cpg_only(self):
-+        for test in self.tests:
-+            if test.enable_corosync:
-+                test.run()
-+
-+    def run_no_cpg(self):
-+        for test in self.tests:
-+            if not test.enable_corosync:
-+                test.run()
-+
-+    def run_tests(self):
-+        for test in self.tests:
-+            test.run()
-+
-+    def exit(self):
-+        for test in self.tests:
-+            if test.executed == 0:
-+                continue
-+
-+            if test.get_exitcode() != 0:
-+                sys.exit(-1)
-+
-+        sys.exit(0)
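-+        # Shells truncate exit statuses to 8 bits, so the sys.exit(-1)
-+        # above is seen by callers as exit status 255.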
-+
-+    def print_results(self):
-+        failures = 0;
-+        success = 0;
-+        print "\n\n======= FINAL RESULTS =========="
-+        print "\n--- FAILURE RESULTS:"
-+        for test in self.tests:
-+            if test.executed == 0:
-+                continue
-+
-+            if test.get_exitcode() != 0:
-+                failures = failures + 1
-+                test.print_result("    ")
-+            else:
-+                success = success + 1
-+
-+        if failures == 0:
-+            print "    None"
-+
-+        print "\n--- TOTALS\n    Pass:%d\n    Fail:%d\n" % (success, failures)
-+    def build_api_sanity_tests(self):
-+        verbose_arg = ""
-+        if self.verbose:
-+            verbose_arg = "-V"
-+
-+        test = self.new_test("standalone_low_level_api_test", "Sanity test client \
                api in standalone mode.")
-+        test.add_cmd("@CRM_DAEMON_DIR@/stonith-test", "-t %s" % (verbose_arg))
-+
-+        test = self.new_test("cpg_low_level_api_test", "Sanity test client api \
                using mainloop and cpg.", 1)
-+        test.add_cmd("@CRM_DAEMON_DIR@/stonith-test", "-m %s" % (verbose_arg))
-+
-+    def build_custom_timeout_tests(self):
-+        # custom timeout without topology
-+        test = self.new_test("cpg_custom_timeout_1",
-+                "Verify per device timeouts work as expected without using \
                topology.", 1)
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
-+        test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=1\"")
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=4\"")
-+        test.add_cmd("stonith_admin", "-F node3 -t 2")
-+        # timeout is 2+1+4 = 7
-+        test.add_stonith_log_pattern("remote op timeout set to 7")
-+
-+        # custom timeout _WITH_ topology
-+        test = self.new_test("cpg_custom_timeout_2",
-+                "Verify per device timeouts work as expected _WITH_ topology.", 1)
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node1 node2 node3\"")
-+        test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout=1\"")
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\" -o \"pcmk_off_timeout@00\"")
-+        test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
-+        test.add_cmd("stonith_admin", "-r node3 -i 2 -v true1")
-+        test.add_cmd("stonith_admin", "-r node3 -i 3 -v false2")
-+        test.add_cmd("stonith_admin", "-F node3 -t 2")
-+        # timeout is 2+1+4000 = 4003
-+        test.add_stonith_log_pattern("remote op timeout set to 4003")
-+
-+    def build_fence_merge_tests(self):
-+
-+        ### Simple test that overlapping fencing operations get merged
-+        test = self.new_test("cpg_custom_merge_single",
-+                "Verify overlapping identical fencing operations are merged, no \
                fencing levels used.", 1)
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\"")
-+        test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" ")
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\"")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd("stonith_admin", "-F node3 -t 10")
-+        ### one merger will happen
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        ### the pattern below signifies that both the original and duplicate \
                operation completed
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+
-+        ### Test that multiple mergers occur
-+        test = self.new_test("cpg_custom_merge_multiple",
-+                "Verify multiple overlapping identical fencing operations are \
                merged", 1)
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\"")
-+        test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"delay=2\" -o \"pcmk_host_list=node3\" ")
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\"")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd("stonith_admin", "-F node3 -t 10")
-+        ### 4 mergers should occur
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        ### the patterns below signify that the original and each duplicate \
                operation completed
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+
-+        ### Test that multiple mergers occur with topologies used
-+        test = self.new_test("cpg_custom_merge_with_topology",
-+                "Verify multiple overlapping identical fencing operations are \
                merged with fencing levels.", 1)
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\"")
-+        test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3\" ")
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3\"")
-+        test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
-+        test.add_cmd("stonith_admin", "-r node3 -i 1 -v false2")
-+        test.add_cmd("stonith_admin", "-r node3 -i 2 -v true1")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd_no_wait("stonith_admin", "-F node3 -t 10")
-+        test.add_cmd("stonith_admin", "-F node3 -t 10")
-+        ### 4 mergers should occur
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        test.add_stonith_log_pattern("Merging stonith action off for node node3 \
                originating from client")
-+        ### the patterns below signify that the original and each duplicate \
                operation completed
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+        test.add_stonith_log_pattern("Operation off of node3 by")
-+
-+
-+        test = self.new_test("cpg_custom_no_merge",
-+                "Verify differing fencing operations are not merged", 1)
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3 node2\"")
-+        test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=node3 node2\" ")
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=node3 node2\"")
-+        test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
-+        test.add_cmd("stonith_admin", "-r node3 -i 1 -v false2")
-+        test.add_cmd("stonith_admin", "-r node3 -i 2 -v true1")
-+        test.add_cmd_no_wait("stonith_admin", "-F node2 -t 10")
-+        test.add_cmd("stonith_admin", "-F node3 -t 10")
-+        test.add_stonith_negative_log_pattern("Merging stonith action off for node \
                node3 originating from client")
-+
-+    def build_standalone_tests(self):
-+        test_types = [
-+            {
-+                "prefix" : "standalone" ,
-+                "use_cpg" : 0,
-+            },
-+            {
-+                "prefix" : "cpg" ,
-+                "use_cpg" : 1,
-+            },
-+        ]
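-+        # Each entry above fans every scenario below out into two variants:
-+        # a "standalone_*" test with use_cpg=0 and a "cpg_*" test with
-+        # use_cpg=1 (e.g. cpg_fence_multi_device_failure), so both code
-+        # paths are exercised from one definition.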
-+
-+        # test what happens when all devices timeout
-+        for test_type in test_types:
-+            test = self.new_test("%s_fence_multi_device_failure" % \
                test_type["prefix"],
-+                    "Verify that all devices timeout, a fencing failure is \
                returned.", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false2  -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false3 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            if test_type["use_cpg"] == 1:
-+                test.add_expected_fail_cmd("stonith_admin", "-F node3 -t 2", 194)
-+                test.add_stonith_log_pattern("remote op timeout set to 6")
-+            else:
-+                test.add_expected_fail_cmd("stonith_admin", "-F node3 -t 2", 55)
-+
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false1' \
                returned: ")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false2' \
                returned: ")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false3' \
                returned: ")
-+
-+        # test what happens when multiple devices can fence a node, but the first \
                device fails.
-+        for test_type in test_types:
-+            test = self.new_test("%s_fence_device_failure_rollover" % \
                test_type["prefix"],
-+                    "Verify that when one fence device fails for a node, the others \
                are tried.", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-F node3 -t 2")
-+
-+            if test_type["use_cpg"] == 1:
-+                test.add_stonith_log_pattern("remote op timeout set to 6")
-+
-+        # simple topology test for one device
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+
-+            test = self.new_test("%s_topology_simple" % test_type["prefix"],
-+                    "Verify all fencing devices at a level are used.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true  -a fence_dummy -o \"mode=pass\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v true")
-+            test.add_cmd("stonith_admin", "-F node3 -t 2")
-+
-+            test.add_stonith_log_pattern("remote op timeout set to 2")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true' \
                returned: 0")
-+
-+
-+        # add topology, delete topology, verify fencing still works
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+
-+            test = self.new_test("%s_topology_add_remove" % test_type["prefix"],
-+                    "Verify fencing occurrs after all topology levels are removed", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true  -a fence_dummy -o \"mode=pass\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v true")
-+            test.add_cmd("stonith_admin", "-d node3 -i 1")
-+            test.add_cmd("stonith_admin", "-F node3 -t 2")
-+
-+            test.add_stonith_log_pattern("remote op timeout set to 2")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true' \
                returned: 0")
-+
-+        # test what happens when the first fencing level has multiple devices.
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+
-+            test = self.new_test("%s_topology_device_fails" % test_type["prefix"],
-+                    "Verify if one device in a level fails, the other is tried.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R false  -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true  -a fence_dummy -o \"mode=pass\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v false")
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v true")
-+            test.add_cmd("stonith_admin", "-F node3 -t 20")
-+
-+            test.add_stonith_log_pattern("remote op timeout set to 40")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false' \
                returned: -201")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true' \
                returned: 0")
-+
-+        # test what happens when the first fencing level fails.
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+
-+            test = self.new_test("%s_topology_multi_level_fails" % \
                test_type["prefix"],
-+                    "Verify if one level fails, the next leve is tried.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true4  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v true1")
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v true2")
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v false2")
-+            test.add_cmd("stonith_admin", "-r node3 -i 3 -v true3")
-+            test.add_cmd("stonith_admin", "-r node3 -i 3 -v true4")
-+
-+            test.add_cmd("stonith_admin", "-F node3 -t 3")
-+
-+            test.add_stonith_log_pattern("remote op timeout set to 18")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false1' \
                returned: -201")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false2' \
                returned: -201")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true3' \
                returned: 0")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true4' \
                returned: 0")
-+
-+
-+        # test what happens when the first fencing level has devices that no one \
                has registered
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+
-+            test = self.new_test("%s_topology_missing_devices" % \
                test_type["prefix"],
-+                    "Verify topology can continue with missing devices.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true4  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v true1")
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v true2")
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v false2")
-+            test.add_cmd("stonith_admin", "-r node3 -i 3 -v true3")
-+            test.add_cmd("stonith_admin", "-r node3 -i 3 -v true4")
-+
-+            test.add_cmd("stonith_admin", "-F node3 -t 2")
-+
-+        # Test what happens if multiple fencing levels are defined, and then one \
                of them is removed.
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+
-+            test = self.new_test("%s_topology_level_removal" % test_type["prefix"],
-+                    "Verify level removal works.", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true4  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v false1")
-+            test.add_cmd("stonith_admin", "-r node3 -i 1 -v true1")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v true2")
-+            test.add_cmd("stonith_admin", "-r node3 -i 2 -v false2")
-+
-+            test.add_cmd("stonith_admin", "-r node3 -i 3 -v true3")
-+            test.add_cmd("stonith_admin", "-r node3 -i 3 -v true4")
-+
-+            # Now remove level 2, verify none of the devices in level two are hit.
-+            test.add_cmd("stonith_admin", "-d node3 -i 2")
-+
-+            test.add_cmd("stonith_admin", "-F node3 -t 20")
-+
-+            test.add_stonith_log_pattern("remote op timeout set to 8")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'false1' \
                returned: -201")
-+            test.add_stonith_negative_log_pattern("for host 'node3' with device \
                'false2' returned: ")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true3' \
                returned: 0")
-+            test.add_stonith_log_pattern("for host 'node3' with device 'true4' \
                returned: 0")
-+
-+        # test that stonithd builds the correct list of devices that can fence a \
                node.
-+        for test_type in test_types:
-+            test = self.new_test("%s_list_devices" % test_type["prefix"],
-+                    "Verify list of devices that can fence a node is correct", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node3\"")
-+            test.add_cmd("stonith_admin", "-R true2 -a fence_dummy -o \"mode=pass\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-R true3 -a fence_dummy -o \"mode=pass\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+
-+            test.add_cmd_check_stdout("stonith_admin", "-l node1 -V", "true2", \
                "true1")
-+            test.add_cmd_check_stdout("stonith_admin", "-l node1 -V", "true3", \
                "true1")
-+
-+        # simple test of device monitor
-+        for test_type in test_types:
-+            test = self.new_test("%s_monitor" % test_type["prefix"],
-+                    "Verify device is reachable", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node3\"")
-+            test.add_cmd("stonith_admin", "-R false1  -a fence_dummy -o \"modeúil\" \
                -o \"pcmk_host_list=node3\"")
-+
-+            test.add_cmd("stonith_admin", "-Q true1")
-+            test.add_cmd("stonith_admin", "-Q false1")
-+            test.add_expected_fail_cmd("stonith_admin", "-Q true2", 237)
-+
-+        # Verify monitor occurs for duration of timeout period on failure
-+        for test_type in test_types:
-+            test = self.new_test("%s_monitor_timeout" % test_type["prefix"],
-+                    "Verify monitor uses duration of timeout period given.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_monitor_fail -o \
                \"pcmk_host_list=node3\"")
-+            test.add_expected_fail_cmd("stonith_admin", "-Q true1 -t 5", 195)
-+            test.add_stonith_log_pattern("Attempt 2 to execute")
-+
-+        # Verify monitor occurs for duration of timeout period on failure, but \
                stops at max retries
-+        for test_type in test_types:
-+            test = self.new_test("%s_monitor_timeout_max_retries" % \
                test_type["prefix"],
-+                    "Verify monitor retries until max retry value or timeout is \
                hit.", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_monitor_fail -o \
                \"pcmk_host_list=node3\"")
-+            test.add_expected_fail_cmd("stonith_admin", "-Q true1 -t 15", 195)
-+            test.add_stonith_log_pattern("Attempted to execute agent \
                fence_dummy_monitor_fail (list) the maximum number of times")
-+
-+        # simple register test
-+        for test_type in test_types:
-+            test = self.new_test("%s_register" % test_type["prefix"],
-+                    "Verify devices can be registered and un-registered", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node3\"")
-+
-+            test.add_cmd("stonith_admin", "-Q true1")
-+
-+            test.add_cmd("stonith_admin", "-D true1")
-+
-+            test.add_expected_fail_cmd("stonith_admin", "-Q true1", 237)
-+
-+
-+        # simple reboot test
-+        for test_type in test_types:
-+            test = self.new_test("%s_reboot" % test_type["prefix"],
-+                    "Verify devices can be rebooted", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node3\"")
-+
-+            test.add_cmd("stonith_admin", "-B node3 -t 2")
-+
-+            test.add_cmd("stonith_admin", "-D true1")
-+
-+            test.add_expected_fail_cmd("stonith_admin", "-Q true1", 237)
-+
-+        # test fencing history.
-+        for test_type in test_types:
-+            if test_type["use_cpg"] == 0:
-+                continue
-+            test = self.new_test("%s_fence_history" % test_type["prefix"],
-+                    "Verify last fencing operation is returned.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_list=node3\"")
-+
-+            test.add_cmd("stonith_admin", "-F node3 -t 2 -V")
-+
-+            test.add_cmd_check_stdout("stonith_admin", "-H node3", "was able to \
                turn off node node3", "")
-+
-+        # simple test of dynamic list query
-+        for test_type in test_types:
-+            test = self.new_test("%s_dynamic_list_query" % test_type["prefix"],
-+                    "Verify dynamic list of fencing devices can be retrieved.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_list")
-+            test.add_cmd("stonith_admin", "-R true2  -a fence_dummy_list")
-+            test.add_cmd("stonith_admin", "-R true3  -a fence_dummy_list")
-+
-+            test.add_cmd_check_stdout("stonith_admin", "-l fake_port_1", "3 devices \
                found")
-+
-+
-+        # fence using dynamic list query
-+        for test_type in test_types:
-+            test = self.new_test("%s_fence_dynamic_list_query" % \
                test_type["prefix"],
-+                    "Verify dynamic list of fencing devices can be retrieved.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy_list")
-+            test.add_cmd("stonith_admin", "-R true2  -a fence_dummy_list")
-+            test.add_cmd("stonith_admin", "-R true3  -a fence_dummy_list")
-+
-+            test.add_cmd("stonith_admin", "-F fake_port_1 -t 5 -V");
-+
-+        # simple test of query using the status action
-+        for test_type in test_types:
-+            test = self.new_test("%s_status_query" % test_type["prefix"],
-+                    "Verify dynamic list of fencing devices can be retrieved.", \
                test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_check=status\"")
-+            test.add_cmd("stonith_admin", "-R true2  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_check=status\"")
-+            test.add_cmd("stonith_admin", "-R true3  -a fence_dummy -o \
                \"mode=pass\" -o \"pcmk_host_check=status\"")
-+
-+            test.add_cmd_check_stdout("stonith_admin", "-l fake_port_1", "3 devices \
                found")
-+
-+        # test what happens when no reboot action is advertised
-+        for test_type in test_types:
-+            test = self.new_test("%s_no_reboot_support" % test_type["prefix"],
-+                    "Verify reboot action defaults to off when no reboot action is \
                advertised by agent.", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_no_reboot -o \
                \"mode=pass\" -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-B node1 -t 5 -V");
-+            test.add_stonith_log_pattern("does not advertise support for 'reboot', \
                performing 'off'")
-+            test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
-+
-+        # make sure reboot is used when reboot action is advertised
-+        for test_type in test_types:
-+            test = self.new_test("%s_with_reboot_support" % test_type["prefix"],
-+                    "Verify reboot action can be used when metadata advertises \
                it.", test_type["use_cpg"])
-+            test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" \
                -o \"pcmk_host_list=node1 node2 node3\"")
-+            test.add_cmd("stonith_admin", "-B node1 -t 5 -V");
-+            test.add_stonith_negative_log_pattern("does not advertise support for \
                'reboot', performing 'off'")
-+            test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
-+
-+    def build_nodeid_tests(self):
-+        our_uname = output_from_command("uname -n")
-+        if our_uname:
-+            our_uname = our_uname[0]
-+
-+        ### verify nodeid is supplied when nodeid is in the metadata parameters
-+        test = self.new_test("cpg_supply_nodeid",
-+                "Verify nodeid is given when fence agent has nodeid as parameter", \
                1)
-+
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-F %s -t 3" % (our_uname))
-+        test.add_stonith_log_pattern("For stonith action (off) for victim %s, \
                adding nodeid" % (our_uname))
-+
-+        ### verify nodeid is _NOT_ supplied when nodeid is not in the metadata \
                parameters
-+        test = self.new_test("cpg_do_not_supply_nodeid",
-+                "Verify nodeid is _NOT_ given when fence agent does not have nodeid \
                as parameter", 1)
-+
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-F %s -t 3" % (our_uname))
-+        test.add_stonith_negative_log_pattern("For stonith action (off) for victim \
                %s, adding nodeid" % (our_uname))
-+
-+        ### verify nodeid use doesn't explode standalone mode
-+        test = self.new_test("standalone_do_not_supply_nodeid",
-+                "Verify nodeid in metadata parameter list doesn't kill standalone \
                mode", 0)
-+
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-F %s -t 3" % (our_uname))
-+        test.add_stonith_negative_log_pattern("For stonith action (off) for victim \
                %s, adding nodeid" % (our_uname))
-+
-+
-+    def build_unfence_tests(self):
-+        our_uname = output_from_command("uname -n")
-+        if our_uname:
-+            our_uname = our_uname[0]
-+
-+        ### verify unfencing using automatic unfencing
-+        test = self.new_test("cpg_unfence_required_1",
-+                "Verify require unfencing on all devices when automatic=true in \
                agent's metadata", 1)
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
-+        # both devices should be executed
-+        test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
-+        test.add_stonith_log_pattern("with device 'true2' returned: 0 (OK)");
-+
-+
-+        ### verify unfencing using automatic unfencing fails if any of the required \
                agents fail
-+        test = self.new_test("cpg_unfence_required_2",
-+                "Verify require unfencing on all devices when automatic=true in \
                agent's metadata", 1)
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"modeúil\" -o \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_expected_fail_cmd("stonith_admin", "-U %s -t 6" % (our_uname), \
                143)
-+
-+        ### verify unfencing using automatic devices with topology
-+        test = self.new_test("cpg_unfence_required_3",
-+                "Verify require unfencing on all devices even when required devices \
                are at different topology levels", 1)
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 1 -v true1" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 2 -v true2" % (our_uname))
-+        test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
-+        test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
-+        test.add_stonith_log_pattern("with device 'true2' returned: 0 (OK)");
-+
-+
-+        ### verify all required devices are executed even when topology levels fail
-+        test = self.new_test("cpg_unfence_required_4",
-+                "Verify all required devices are executed even with topology levels \
                fail.", 1)
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true2 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true3 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true4 -a fence_dummy_automatic_unfence -o \
                \"mode=pass\" -o \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R false1 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R false2 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R false3 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R false4 -a fence_dummy -o \"modeúil\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 1 -v true1" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 1 -v false1" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 2 -v false2" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 2 -v true2" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 2 -v false3" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 2 -v true3" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 3 -v false4" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 4 -v true4" % (our_uname))
-+        test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
-+        test.add_stonith_log_pattern("with device 'true1' returned: 0 (OK)");
-+        test.add_stonith_log_pattern("with device 'true2' returned: 0 (OK)");
-+        test.add_stonith_log_pattern("with device 'true3' returned: 0 (OK)");
-+        test.add_stonith_log_pattern("with device 'true4' returned: 0 (OK)");
-+
-+        ### verify unfencing using on_target device
-+        test = self.new_test("cpg_unfence_on_target_1",
-+                "Verify unfencing with on_target = true", 1)
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
-+        test.add_stonith_log_pattern("(on) to be executed on the target node")
-+
-+
-+        ### verify failure of unfencing using on_target device
-+        test = self.new_test("cpg_unfence_on_target_2",
-+                "Verify failure unfencing with on_target = true", 1)
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node_fake_1234\"" % (our_uname))
-+        test.add_expected_fail_cmd("stonith_admin", "-U node_fake_1234 -t 3", 237)
-+        test.add_stonith_log_pattern("(on) to be executed on the target node")
-+
-+
-+        ### verify unfencing using on_target device with topology
-+        test = self.new_test("cpg_unfence_on_target_3",
-+                "Verify unfencing with on_target = true using topology", 1)
-+
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true2 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node3\"" % (our_uname))
-+
-+        test.add_cmd("stonith_admin", "-r %s -i 1 -v true1" % (our_uname))
-+        test.add_cmd("stonith_admin", "-r %s -i 2 -v true2" % (our_uname))
-+
-+        test.add_cmd("stonith_admin", "-U %s -t 3" % (our_uname))
-+        test.add_stonith_log_pattern("(on) to be executed on the target node")
-+
-+        ### verify unfencing using on_target device with topology fails when victim \
                node doesn't exist
-+        test = self.new_test("cpg_unfence_on_target_4",
-+                "Verify unfencing failure with on_target = true using topology", 1)
-+
-+        test.add_cmd("stonith_admin", "-R true1 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node_fake\"" % (our_uname))
-+        test.add_cmd("stonith_admin", "-R true2 -a fence_dummy -o \"mode=pass\" -o \
                \"pcmk_host_list=%s node_fake\"" % (our_uname))
-+
-+        test.add_cmd("stonith_admin", "-r node_fake -i 1 -v true1")
-+        test.add_cmd("stonith_admin", "-r node_fake -i 2 -v true2")
-+
-+        test.add_expected_fail_cmd("stonith_admin", "-U node_fake -t 3", 237)
-+        test.add_stonith_log_pattern("(on) to be executed on the target node")
-+
-+    def build_remap_tests(self):
-+        test = self.new_test("cpg_remap_simple",
-+                             "Verify sequential topology reboot is remapped to \
                all-off-then-all-on", 1)
-+        test.add_cmd("stonith_admin",
-+                     """-R true1 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """
-+                     """-o "pcmk_off_timeout=1" -o "pcmk_reboot_timeout" """)
-+        test.add_cmd("stonith_admin",
-+                     """-R true2 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """
-+                     """-o "pcmk_off_timeout=2" -o "pcmk_reboot_timeout " """)
-+        test.add_cmd("stonith_admin", "-r node_fake -i 1 -v true1 -v true2")
-+        test.add_cmd("stonith_admin", "-B node_fake -t 5")
-+        test.add_stonith_log_pattern("Remapping multiple-device reboot of \
                node_fake")
-+        # timeout should be sum of off timeouts (1+2=3), not reboot timeouts \
                (10+20=30)
-+        test.add_stonith_log_pattern("remote op timeout set to 3 for fencing of \
                node node_fake")
-+        test.add_stonith_log_pattern("perform op off node_fake with true1")
-+        test.add_stonith_log_pattern("perform op off node_fake with true2")
-+        test.add_stonith_log_pattern("Remapped off of node_fake complete, remapping \
                to on")
-+        # fence_dummy sets "on" as an on_target action
-+        test.add_stonith_log_pattern("Ignoring true1 'on' failure (no capable \
                peers) for node_fake")
-+        test.add_stonith_log_pattern("Ignoring true2 'on' failure (no capable \
                peers) for node_fake")
-+        test.add_stonith_log_pattern("Undoing remap of reboot of node_fake")
-+
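The timeout comment above is the crux of the remap logic: once a multiple-device
reboot is remapped to off-then-on, the operation's total timeout is derived from
the per-device "off" timeouts rather than the "reboot" ones. A minimal standalone
C sketch of that arithmetic (illustrative names and structure only, not
Pacemaker's actual get_op_total_timeout()):

    #include <stdio.h>

    /* Hypothetical device record; only the two timeouts matter here. */
    struct dev { const char *id; int off_timeout; int reboot_timeout; };

    /* Sum the "off" timeouts, mirroring the "remote op timeout set to 3"
     * pattern asserted by cpg_remap_simple above. */
    static int remapped_reboot_timeout(const struct dev *devs, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += devs[i].off_timeout;   /* not devs[i].reboot_timeout */
        }
        return total;
    }

    int main(void)
    {
        struct dev level[] = { { "true1", 1, 10 }, { "true2", 2, 20 } };

        printf("%d\n", remapped_reboot_timeout(level, 2));   /* prints 3 */
        return 0;
    }
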
-+        test = self.new_test("cpg_remap_automatic",
-+                             "Verify remapped topology reboot skips automatic \
                'on'", 1)
-+        test.add_cmd("stonith_admin",
-+                     """-R true1 -a fence_dummy_automatic_unfence """
-+                     """-o "mode=pass" -o "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin",
-+                     """-R true2 -a fence_dummy_automatic_unfence """
-+                     """-o "mode=pass" -o "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", "-r node_fake -i 1 -v true1 -v true2")
-+        test.add_cmd("stonith_admin", "-B node_fake -t 5")
-+        test.add_stonith_log_pattern("Remapping multiple-device reboot of \
                node_fake")
-+        test.add_stonith_log_pattern("perform op off node_fake with true1")
-+        test.add_stonith_log_pattern("perform op off node_fake with true2")
-+        test.add_stonith_log_pattern("Remapped off of node_fake complete, remapping \
                to on")
-+        test.add_stonith_log_pattern("Undoing remap of reboot of node_fake")
-+        test.add_stonith_negative_log_pattern("perform op on node_fake with")
-+        test.add_stonith_negative_log_pattern("'on' failure")
-+
-+        test = self.new_test("cpg_remap_complex_1",
-+                "Verify remapped topology reboot in second level works if \
                non-remapped first level fails", 1)
-+        test.add_cmd("stonith_admin", """-R false1 -a fence_dummy -o "modeúil" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", """-R true1 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", """-R true2 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", "-r node_fake -i 1 -v false1")
-+        test.add_cmd("stonith_admin", "-r node_fake -i 2 -v true1 -v true2")
-+        test.add_cmd("stonith_admin", "-B node_fake -t 5")
-+        test.add_stonith_log_pattern("perform op reboot node_fake with false1")
-+        test.add_stonith_log_pattern("Remapping multiple-device reboot of \
                node_fake")
-+        test.add_stonith_log_pattern("perform op off node_fake with true1")
-+        test.add_stonith_log_pattern("perform op off node_fake with true2")
-+        test.add_stonith_log_pattern("Remapped off of node_fake complete, remapping \
                to on")
-+        test.add_stonith_log_pattern("Ignoring true1 'on' failure (no capable \
                peers) for node_fake")
-+        test.add_stonith_log_pattern("Ignoring true2 'on' failure (no capable \
                peers) for node_fake")
-+        test.add_stonith_log_pattern("Undoing remap of reboot of node_fake")
-+
-+        test = self.new_test("cpg_remap_complex_2",
-+                "Verify remapped topology reboot failure in second level proceeds \
                to third level", 1)
-+        test.add_cmd("stonith_admin", """-R false1 -a fence_dummy -o "modeúil" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", """-R false2 -a fence_dummy -o "modeúil" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", """-R true1 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", """-R true2 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", """-R true3 -a fence_dummy -o "mode=pass" -o \
                "pcmk_host_list=node_fake" """)
-+        test.add_cmd("stonith_admin", "-r node_fake -i 1 -v false1")
-+        test.add_cmd("stonith_admin", "-r node_fake -i 2 -v true1 -v false2 -v \
                true3")
-+        test.add_cmd("stonith_admin", "-r node_fake -i 3 -v true2")
-+        test.add_cmd("stonith_admin", "-B node_fake -t 5")
-+        test.add_stonith_log_pattern("perform op reboot node_fake with false1")
-+        test.add_stonith_log_pattern("Remapping multiple-device reboot of \
                node_fake")
-+        test.add_stonith_log_pattern("perform op off node_fake with true1")
-+        test.add_stonith_log_pattern("perform op off node_fake with false2")
-+        test.add_stonith_log_pattern("Attempted to execute agent fence_dummy (off) \
                the maximum number of times")
-+        test.add_stonith_log_pattern("Undoing remap of reboot of node_fake")
-+        test.add_stonith_log_pattern("perform op reboot node_fake with true2")
-+        test.add_stonith_negative_log_pattern("node_fake with true3")
-+
-+    def setup_environment(self, use_corosync):
-+        if self.autogen_corosync_cfg and use_corosync:
-+            corosync_conf = ("""
- totem {
-         version: 2
-         crypto_cipher: none
-@@ -908,15 +984,15 @@ logging {
- }
- """)
-
--			os.system("cat <<-END >>/etc/corosync/corosync.conf\n%s\nEND" % (corosync_conf))
-+            os.system("cat <<-END >>/etc/corosync/corosync.conf\n%s\nEND" % \
                (corosync_conf))
-
-
--		if use_corosync:
--			### make sure we are in control ###
--			self.stop_corosync()
--			self.start_corosync()
-+        if use_corosync:
-+            ### make sure we are in control ###
-+            self.stop_corosync()
-+            self.start_corosync()
-
--		monitor_fail_agent = ("""#!/usr/bin/python
-+        monitor_fail_agent = ("""#!/usr/bin/python
- import sys
- def main():
-     for line in sys.stdin.readlines():
-@@ -927,7 +1003,7 @@ if __name__ == "__main__":
-     main()
- """)
-
--		dynamic_list_agent = ("""#!/usr/bin/python
-+        dynamic_list_agent = ("""#!/usr/bin/python
- import sys
- def main():
-     for line in sys.stdin.readlines():
-@@ -942,140 +1018,141 @@ if __name__ == "__main__":
- """)
-
-
--		os.system("cat <<-END >>/usr/sbin/fence_dummy_list\n%s\nEND" % \
                (dynamic_list_agent))
--		os.system("chmod 711 /usr/sbin/fence_dummy_list")
-+        os.system("cat <<-END >>/usr/sbin/fence_dummy_list\n%s\nEND" % \
                (dynamic_list_agent))
-+        os.system("chmod 711 /usr/sbin/fence_dummy_list")
-
--		os.system("cat <<-END >>/usr/sbin/fence_dummy_monitor_fail\n%s\nEND" % \
                (monitor_fail_agent))
--		os.system("chmod 711 /usr/sbin/fence_dummy_monitor_fail")
-+        os.system("cat <<-END >>/usr/sbin/fence_dummy_monitor_fail\n%s\nEND" % \
                (monitor_fail_agent))
-+        os.system("chmod 711 /usr/sbin/fence_dummy_monitor_fail")
-
--		os.system("cp /usr/share/pacemaker/tests/cts/fence_dummy /usr/sbin/fence_dummy")
-+        os.system("cp /usr/share/pacemaker/tests/cts/fence_dummy \
                /usr/sbin/fence_dummy")
-
--		# modifies dummy agent to require unfencing
--		os.system("cat /usr/share/pacemaker/tests/cts/fence_dummy  | sed \
                's/on_target=/automatic=/g' > \
                /usr/sbin/fence_dummy_automatic_unfence");
--		os.system("chmod 711 /usr/sbin/fence_dummy_automatic_unfence")
-+        # modifies dummy agent to require unfencing
-+        os.system("cat /usr/share/pacemaker/tests/cts/fence_dummy  | sed \
                's/on_target=/automatic=/g' > \
                /usr/sbin/fence_dummy_automatic_unfence");
-+        os.system("chmod 711 /usr/sbin/fence_dummy_automatic_unfence")
-
--		# modifies dummy agent to not advertise reboot
--		os.system("cat /usr/share/pacemaker/tests/cts/fence_dummy  | sed \
                's/^.*<action.*name.*reboot.*>.*//g' > \
                /usr/sbin/fence_dummy_no_reboot");
--		os.system("chmod 711 /usr/sbin/fence_dummy_no_reboot")
-+        # modifies dummy agent to not advertise reboot
-+        os.system("cat /usr/share/pacemaker/tests/cts/fence_dummy  | sed \
                's/^.*<action.*name.*reboot.*>.*//g' > \
                /usr/sbin/fence_dummy_no_reboot");
-+        os.system("chmod 711 /usr/sbin/fence_dummy_no_reboot")
-
--	def cleanup_environment(self, use_corosync):
--		if use_corosync:
--			self.stop_corosync()
-+    def cleanup_environment(self, use_corosync):
-+        if use_corosync:
-+            self.stop_corosync()
-
--			if self.verbose and os.path.exists('/var/log/corosync.log'):
--				print "Corosync output"
--				f = open('/var/log/corosync.log', 'r')
--				for line in f.readlines():
--					print line.strip()
--				os.remove('/var/log/corosync.log')
-+            if self.verbose and os.path.exists('/var/log/corosync.log'):
-+                print "Corosync output"
-+                f = open('/var/log/corosync.log', 'r')
-+                for line in f.readlines():
-+                    print line.strip()
-+                os.remove('/var/log/corosync.log')
-
--		if self.autogen_corosync_cfg:
--			os.system("rm -f /etc/corosync/corosync.conf")
-+        if self.autogen_corosync_cfg:
-+            os.system("rm -f /etc/corosync/corosync.conf")
-
--		os.system("rm -f /usr/sbin/fence_dummy_monitor_fail")
--		os.system("rm -f /usr/sbin/fence_dummy_list")
--		os.system("rm -f /usr/sbin/fence_dummy")
--		os.system("rm -f /usr/sbin/fence_dummy_automatic_unfence")
--		os.system("rm -f /usr/sbin/fence_dummy_no_reboot")
-+        os.system("rm -f /usr/sbin/fence_dummy_monitor_fail")
-+        os.system("rm -f /usr/sbin/fence_dummy_list")
-+        os.system("rm -f /usr/sbin/fence_dummy")
-+        os.system("rm -f /usr/sbin/fence_dummy_automatic_unfence")
-+        os.system("rm -f /usr/sbin/fence_dummy_no_reboot")
-
- class TestOptions:
--	def __init__(self):
--		self.options = {}
--		self.options['list-tests'] = 0
--		self.options['run-all'] = 1
--		self.options['run-only'] = ""
--		self.options['run-only-pattern'] = ""
--		self.options['verbose'] = 0
--		self.options['invalid-arg'] = ""
--		self.options['cpg-only'] = 0
--		self.options['no-cpg'] = 0
--		self.options['show-usage'] = 0
--
--	def build_options(self, argv):
--		args = argv[1:]
--		skip = 0
--		for i in range(0, len(args)):
--			if skip:
--				skip = 0
--				continue
--			elif args[i] == "-h" or args[i] == "--help":
--				self.options['show-usage'] = 1
--			elif args[i] == "-l" or args[i] == "--list-tests":
--				self.options['list-tests'] = 1
--			elif args[i] == "-V" or args[i] == "--verbose":
--				self.options['verbose'] = 1
--			elif args[i] == "-n" or args[i] == "--no-cpg":
--				self.options['no-cpg'] = 1
--			elif args[i] == "-c" or args[i] == "--cpg-only":
--				self.options['cpg-only'] = 1
--			elif args[i] == "-r" or args[i] == "--run-only":
--				self.options['run-only'] = args[i+1]
--				skip = 1
--			elif args[i] == "-p" or args[i] == "--run-only-pattern":
--				self.options['run-only-pattern'] = args[i+1]
--				skip = 1
--
--	def show_usage(self):
--		print "usage: " + sys.argv[0] + " [options]"
--		print "If no options are provided, all tests will run"
--		print "Options:"
--		print "\t [--help | -h]                        Show usage"
--		print "\t [--list-tests | -l]                  Print out all registered tests."
--		print "\t [--cpg-only | -c]                    Only run tests that require \
                corosync."
--		print "\t [--no-cpg | -n]                      Only run tests that do not require \
                corosync"
--		print "\t [--run-only | -r 'testname']         Run a specific test"
--		print "\t [--verbose | -V]                     Verbose output"
--		print "\t [--run-only-pattern | -p 'string']   Run only tests containing the \
                string value"
--		print "\n\tExample: Run only the test 'start_stop'"
--		print "\t\t python ./regression.py --run-only start_stop"
--		print "\n\tExample: Run only the tests with the string 'systemd' present in them"
--		print "\t\t python ./regression.py --run-only-pattern systemd"
-+    def __init__(self):
-+        self.options = {}
-+        self.options['list-tests'] = 0
-+        self.options['run-all'] = 1
-+        self.options['run-only'] = ""
-+        self.options['run-only-pattern'] = ""
-+        self.options['verbose'] = 0
-+        self.options['invalid-arg'] = ""
-+        self.options['cpg-only'] = 0
-+        self.options['no-cpg'] = 0
-+        self.options['show-usage'] = 0
-+
-+    def build_options(self, argv):
-+        args = argv[1:]
-+        skip = 0
-+        for i in range(0, len(args)):
-+            if skip:
-+                skip = 0
-+                continue
-+            elif args[i] == "-h" or args[i] == "--help":
-+                self.options['show-usage'] = 1
-+            elif args[i] == "-l" or args[i] == "--list-tests":
-+                self.options['list-tests'] = 1
-+            elif args[i] == "-V" or args[i] == "--verbose":
-+                self.options['verbose'] = 1
-+            elif args[i] == "-n" or args[i] == "--no-cpg":
-+                self.options['no-cpg'] = 1
-+            elif args[i] == "-c" or args[i] == "--cpg-only":
-+                self.options['cpg-only'] = 1
-+            elif args[i] == "-r" or args[i] == "--run-only":
-+                self.options['run-only'] = args[i+1]
-+                skip = 1
-+            elif args[i] == "-p" or args[i] == "--run-only-pattern":
-+                self.options['run-only-pattern'] = args[i+1]
-+                skip = 1
-+
-+    def show_usage(self):
-+        print "usage: " + sys.argv[0] + " [options]"
-+        print "If no options are provided, all tests will run"
-+        print "Options:"
-+        print "\t [--help | -h]                        Show usage"
-+        print "\t [--list-tests | -l]                  Print out all registered \
                tests."
-+        print "\t [--cpg-only | -c]                    Only run tests that require \
                corosync."
-+        print "\t [--no-cpg | -n]                      Only run tests that do not \
                require corosync"
-+        print "\t [--run-only | -r 'testname']         Run a specific test"
-+        print "\t [--verbose | -V]                     Verbose output"
-+        print "\t [--run-only-pattern | -p 'string']   Run only tests containing \
                the string value"
-+        print "\n\tExample: Run only the test 'start_top'"
-+        print "\t\t python ./regression.py --run-only start_stop"
-+        print "\n\tExample: Run only the tests with the string 'systemd' present in \
                them"
-+        print "\t\t python ./regression.py --run-only-pattern systemd"
-
- def main(argv):
--	o = TestOptions()
--	o.build_options(argv)
--
--	use_corosync = 1
--
--	tests = Tests(o.options['verbose'])
--	tests.build_standalone_tests()
--	tests.build_custom_timeout_tests()
--	tests.build_api_sanity_tests()
--	tests.build_fence_merge_tests()
--	tests.build_unfence_tests()
--	tests.build_nodeid_tests()
--
--	if o.options['list-tests']:
--		tests.print_list()
--		sys.exit(0)
--	elif o.options['show-usage']:
--		o.show_usage()
--		sys.exit(0)
--
--	print "Starting ..."
--
--	if o.options['no-cpg']:
--		use_corosync = 0
--
--	tests.setup_environment(use_corosync)
--
--	if o.options['run-only-pattern'] != "":
--		tests.run_tests_matching(o.options['run-only-pattern'])
--		tests.print_results()
--	elif o.options['run-only'] != "":
--		tests.run_single(o.options['run-only'])
--		tests.print_results()
--	elif o.options['no-cpg']:
--		tests.run_no_cpg()
--		tests.print_results()
--	elif o.options['cpg-only']:
--		tests.run_cpg_only()
--		tests.print_results()
--	else:
--		tests.run_tests()
--		tests.print_results()
--
--	tests.cleanup_environment(use_corosync)
--	tests.exit()
-+    o = TestOptions()
-+    o.build_options(argv)
-+
-+    use_corosync = 1
-+
-+    tests = Tests(o.options['verbose'])
-+    tests.build_standalone_tests()
-+    tests.build_custom_timeout_tests()
-+    tests.build_api_sanity_tests()
-+    tests.build_fence_merge_tests()
-+    tests.build_unfence_tests()
-+    tests.build_nodeid_tests()
-+    tests.build_remap_tests()
-+
-+    if o.options['list-tests']:
-+        tests.print_list()
-+        sys.exit(0)
-+    elif o.options['show-usage']:
-+        o.show_usage()
-+        sys.exit(0)
-+
-+    print "Starting ..."
-+
-+    if o.options['no-cpg']:
-+        use_corosync = 0
-+
-+    tests.setup_environment(use_corosync)
-+
-+    if o.options['run-only-pattern'] != "":
-+        tests.run_tests_matching(o.options['run-only-pattern'])
-+        tests.print_results()
-+    elif o.options['run-only'] != "":
-+        tests.run_single(o.options['run-only'])
-+        tests.print_results()
-+    elif o.options['no-cpg']:
-+        tests.run_no_cpg()
-+        tests.print_results()
-+    elif o.options['cpg-only']:
-+        tests.run_cpg_only()
-+        tests.print_results()
-+    else:
-+        tests.run_tests()
-+        tests.print_results()
-+
-+    tests.cleanup_environment(use_corosync)
-+    tests.exit()
- if __name__=="__main__":
--	main(sys.argv)
-+    main(sys.argv)
-diff --git a/fencing/remote.c b/fencing/remote.c
-index a568035..2c00b5f 100644
---- a/fencing/remote.c
-+++ b/fencing/remote.c
-@@ -47,17 +47,37 @@
-
- #define TIMEOUT_MULTIPLY_FACTOR 1.2
-
-+/* When one stonithd queries its peers for devices able to handle a fencing
-+ * request, each peer will reply with a list of such devices available to it.
-+ * Each reply will be parsed into a st_query_result_t, with each device's
-+ * information kept in a device_properties_t.
-+ */
-+
-+typedef struct device_properties_s {
-+    /* Whether access to this device has been verified */
-+    gboolean verified;
-+
-+    /* The remaining members are indexed by the operation's "phase" */
-+
-+    /* Whether this device has been executed in each phase */
-+    gboolean executed[3];
-+    /* Whether this device is disallowed from executing in each phase */
-+    gboolean disallowed[3];
-+    /* Action-specific timeout for each phase */
-+    int custom_action_timeout[3];
-+    /* Action-specific maximum random delay for each phase */
-+    int delay_max[3];
-+} device_properties_t;
-+
- typedef struct st_query_result_s {
-+    /* Name of peer that sent this result */
-     char *host;
--    int devices;
--    /* only try peers for non-topology based operations once */
-+    /* Only try peers for non-topology based operations once */
-     gboolean tried;
--    GListPtr device_list;
--    GHashTable *custom_action_timeouts;
--    GHashTable *delay_maxes;
--    /* Subset of devices that peer has verified connectivity on */
--    GHashTable *verified_devices;
--
-+    /* Number of entries in the devices table */
-+    int ndevices;
-+    /* Devices available to this host that are capable of fencing the target */
-+    GHashTable *devices;
- } st_query_result_t;
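For readers skimming the data-structure change: each peer's reply is now a single
GHashTable mapping device IDs to device_properties_t, replacing the four parallel
containers the old struct carried. A self-contained sketch of that two-level
layout using real GLib calls (struct and field names here are illustrative, not
the daemon's):

    #include <glib.h>
    #include <stdio.h>

    /* Illustrative stand-in for device_properties_t */
    typedef struct { gboolean verified; gboolean executed[3]; } dev_props_t;

    int main(void)
    {
        /* One table per peer: device ID -> properties, both freed with the table */
        GHashTable *devices = g_hash_table_new_full(g_str_hash, g_str_equal,
                                                    g_free, g_free);
        dev_props_t *props = g_new0(dev_props_t, 1);

        props->verified = TRUE;
        g_hash_table_insert(devices, g_strdup("true1"), props);

        dev_props_t *found = g_hash_table_lookup(devices, "true1");
        printf("true1 %s\n", (found && found->verified)? "verified" : "unknown");

        g_hash_table_destroy(devices);   /* frees keys and values too */
        return 0;
    }
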
-
- GHashTable *remote_op_list = NULL;
-@@ -67,8 +87,8 @@ extern xmlNode *stonith_create_op(int call_id, const char *token, \
                const char *op
-                                   int call_options);
-
- static void report_timeout_period(remote_fencing_op_t * op, int op_timeout);
--static int get_op_total_timeout(remote_fencing_op_t * op, st_query_result_t * \
                chosen_peer,
--                                int default_timeout);
-+static int get_op_total_timeout(const remote_fencing_op_t *op,
-+                                const st_query_result_t *chosen_peer);
-
- static gint
- sort_strings(gconstpointer a, gconstpointer b)
-@@ -83,15 +103,126 @@ free_remote_query(gpointer data)
-         st_query_result_t *query = data;
-
-         crm_trace("Free'ing query result from %s", query->host);
-+        g_hash_table_destroy(query->devices);
-         free(query->host);
--        g_list_free_full(query->device_list, free);
--        g_hash_table_destroy(query->custom_action_timeouts);
--        g_hash_table_destroy(query->delay_maxes);
--        g_hash_table_destroy(query->verified_devices);
-         free(query);
-     }
- }
-
-+struct peer_count_data {
-+    const remote_fencing_op_t *op;
-+    gboolean verified_only;
-+    int count;
-+};
-+
-+/*!
-+ * \internal
-+ * \brief Increment a counter if a device has not been executed yet
-+ *
-+ * \param[in] key        Device ID (ignored)
-+ * \param[in] value      Device properties
-+ * \param[in] user_data  Peer count data
-+ */
-+static void
-+count_peer_device(gpointer key, gpointer value, gpointer user_data)
-+{
-+    device_properties_t *props = (device_properties_t*)value;
-+    struct peer_count_data *data = user_data;
-+
-+    if (!props->executed[data->op->phase]
-+        && (!data->verified_only || props->verified)) {
-+        ++(data->count);
-+    }
-+}
-+
-+/*!
-+ * \internal
-+ * \brief Check the number of available devices in a peer's query results
-+ *
-+ * \param[in] op             Operation that results are for
-+ * \param[in] peer           Peer to count
-+ * \param[in] verified_only  Whether to count only verified devices
-+ *
-+ * \return Number of devices available to peer that were not already executed
-+ */
-+static int
-+count_peer_devices(const remote_fencing_op_t *op, const st_query_result_t *peer,
-+                   gboolean verified_only)
-+{
-+    struct peer_count_data data;
-+
-+    data.op = op;
-+    data.verified_only = verified_only;
-+    data.count = 0;
-+    if (peer) {
-+        g_hash_table_foreach(peer->devices, count_peer_device, &data);
-+    }
-+    return data.count;
-+}
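count_peer_device()/count_peer_devices() use the standard GLib idiom of threading
a small context struct through g_hash_table_foreach(). The same pattern in
isolation, with hypothetical names and the verified flag packed as a
pointer-sized int for brevity:

    #include <glib.h>
    #include <stdio.h>

    struct count_ctx { gboolean verified_only; int count; };

    /* GHFunc callback: bump the counter when the device qualifies */
    static void count_one(gpointer key, gpointer value, gpointer user_data)
    {
        struct count_ctx *ctx = user_data;
        gboolean verified = GPOINTER_TO_INT(value);

        if (!ctx->verified_only || verified) {
            ctx->count++;
        }
    }

    int main(void)
    {
        GHashTable *devices = g_hash_table_new(g_str_hash, g_str_equal);
        struct count_ctx ctx = { TRUE, 0 };

        g_hash_table_insert(devices, (gpointer) "true1", GINT_TO_POINTER(1));
        g_hash_table_insert(devices, (gpointer) "true2", GINT_TO_POINTER(0));

        g_hash_table_foreach(devices, count_one, &ctx);
        printf("%d verified device(s)\n", ctx.count);   /* prints 1 */

        g_hash_table_destroy(devices);
        return 0;
    }
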
-+
-+/*!
-+ * \internal
-+ * \brief Search for a device in a query result
-+ *
-+ * \param[in] op      Operation that result is for
-+ * \param[in] peer    Query result for a peer
-+ * \param[in] device  Device ID to search for
-+ *
-+ * \return Device properties if found, NULL otherwise
-+ */
-+static device_properties_t *
-+find_peer_device(const remote_fencing_op_t *op, const st_query_result_t *peer,
-+                 const char *device)
-+{
-+    device_properties_t *props = g_hash_table_lookup(peer->devices, device);
-+
-+    return (props && !props->executed[op->phase]
-+           && !props->disallowed[op->phase])? props : NULL;
-+}
-+
-+/*!
-+ * \internal
-+ * \brief Find a device in a peer's device list and mark it as executed
-+ *
-+ * \param[in]     op                     Operation that peer result is for
-+ * \param[in,out] peer                   Peer with results to search
-+ * \param[in]     device                 ID of device to mark as done
-+ * \param[in]     verified_devices_only  Only consider verified devices
-+ *
-+ * \return TRUE if device was found and marked, FALSE otherwise
-+ */
-+static gboolean
-+grab_peer_device(const remote_fencing_op_t *op, st_query_result_t *peer,
-+                 const char *device, gboolean verified_devices_only)
-+{
-+    device_properties_t *props = find_peer_device(op, peer, device);
-+
-+    if ((props == NULL) || (verified_devices_only && !props->verified)) {
-+        return FALSE;
-+    }
-+
-+    crm_trace("Removing %s from %s (%d remaining)",
-+              device, peer->host, count_peer_devices(op, peer, FALSE));
-+    props->executed[op->phase] = TRUE;
-+    return TRUE;
-+}
-+
-+/*!
-+ * \internal
-+ * \brief Free the list of required devices for a particular phase
-+ *
-+ * \param[in,out] op     Operation to modify
-+ * \param[in]     phase  Phase to modify
-+ */
-+static void
-+free_required_list(remote_fencing_op_t *op, enum st_remap_phase phase)
-+{
-+    if (op->required_list[phase]) {
-+        g_list_free_full(op->required_list[phase], free);
-+        op->required_list[phase] = NULL;
-+    }
-+}
-+
- static void
- clear_remote_op_timers(remote_fencing_op_t * op)
- {
-@@ -137,13 +268,100 @@ free_remote_op(gpointer data)
-         g_list_free_full(op->devices_list, free);
-         op->devices_list = NULL;
-     }
--    if (op->required_list) {
--        g_list_free_full(op->required_list, free);
--        op->required_list = NULL;
--    }
-+    free_required_list(op, st_phase_requested);
-+    free_required_list(op, st_phase_off);
-+    free_required_list(op, st_phase_on);
-     free(op);
- }
-
-+/*!
-+ * \internal
-+ * \brief Return an operation's originally requested action (before any remap)
-+ *
-+ * \param[in] op  Operation to check
-+ *
-+ * \return Operation's original action
-+ */
-+static const char *
-+op_requested_action(const remote_fencing_op_t *op)
-+{
-+    return ((op->phase > st_phase_requested)? "reboot" : op->action);
-+}
-+
-+/*!
-+ * \internal
-+ * \brief Remap a "reboot" operation to the "off" phase
-+ *
-+ * \param[in,out] op      Operation to remap
-+ */
-+static void
-+op_phase_off(remote_fencing_op_t *op)
-+{
-+    crm_info("Remapping multiple-device reboot of %s (%s) to off",
-+             op->target, op->id);
-+    op->phase = st_phase_off;
-+
-+    /* Happily, "off" and "on" are shorter than "reboot", so we can reuse the
-+     * memory allocation at each phase.
-+     */
-+    strcpy(op->action, "off");
-+}
-+
-+/*!
-+ * \internal
-+ * \brief Advance a remapped reboot operation to the "on" phase
-+ *
-+ * \param[in,out] op  Operation to remap
-+ */
-+static void
-+op_phase_on(remote_fencing_op_t *op)
-+{
-+    GListPtr iter = NULL;
-+
-+    crm_info("Remapped off of %s complete, remapping to on for %s.%.8s",
-+             op->target, op->client_name, op->id);
-+    op->phase = st_phase_on;
-+    strcpy(op->action, "on");
-+
-+    /* Any devices that are required for "on" will be automatically executed by
-+     * the cluster when the node next joins, so we skip them here.
-+     */
-+    for (iter = op->required_list[op->phase]; iter != NULL; iter = iter->next) {
-+        GListPtr match = g_list_find_custom(op->devices_list, iter->data,
-+                                            sort_strings);
-+
-+        if (match) {
-+            op->devices_list = g_list_remove(op->devices_list, match->data);
-+        }
-+    }
-+
-+    /* We know this level will succeed, because phase 1 completed successfully
-+     * and we ignore any errors from phase 2. So we can free the required list,
-+     * which will keep them from being executed after the device list is done.
-+     */
-+    free_required_list(op, op->phase);
-+
-+    /* Rewind device list pointer */
-+    op->devices = op->devices_list;
-+}
-+
-+/*!
-+ * \internal
-+ * \brief Reset a remapped reboot operation
-+ *
-+ * \param[in,out] op  Operation to reset
-+ */
-+static void
-+undo_op_remap(remote_fencing_op_t *op)
-+{
-+    if (op->phase > 0) {
-+        crm_info("Undoing remap of reboot of %s for %s.%.8s",
-+                 op->target, op->client_name, op->id);
-+        op->phase = st_phase_requested;
-+        strcpy(op->action, "reboot");
-+    }
-+}
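The three helpers above form a small state machine: st_phase_requested, then
st_phase_off, then st_phase_on, with undo_op_remap() restoring the original
request. Because "off" and "on" are shorter than "reboot", op->action can be
rewritten in place with strcpy(). A compilable toy version of the same
transitions (types here are hypothetical):

    #include <stdio.h>
    #include <string.h>

    enum phase { PHASE_REQUESTED, PHASE_OFF, PHASE_ON };

    struct op { enum phase phase; char action[8]; };

    /* Remap "reboot" to "off"/"on": both fit in the original buffer */
    static void phase_off(struct op *o) { o->phase = PHASE_OFF; strcpy(o->action, "off"); }
    static void phase_on(struct op *o)  { o->phase = PHASE_ON;  strcpy(o->action, "on"); }

    /* Restore the original request, as undo_op_remap() does above */
    static void undo_remap(struct op *o)
    {
        if (o->phase > PHASE_REQUESTED) {
            o->phase = PHASE_REQUESTED;
            strcpy(o->action, "reboot");
        }
    }

    int main(void)
    {
        struct op o = { PHASE_REQUESTED, "reboot" };

        phase_off(&o);  printf("%s\n", o.action);   /* off */
        phase_on(&o);   printf("%s\n", o.action);   /* on */
        undo_remap(&o); printf("%s\n", o.action);   /* reboot */
        return 0;
    }
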
-+
- static xmlNode *
- create_op_done_notify(remote_fencing_op_t * op, int rc)
- {
-@@ -271,6 +489,7 @@ remote_op_done(remote_fencing_op_t * op, xmlNode * data, int rc, \
                int dup)
-
-     op->completed = time(NULL);
-     clear_remote_op_timers(op);
-+    undo_op_remap(op);
-
-     if (op->notify_sent == TRUE) {
-         crm_err("Already sent notifications for '%s of %s by %s' (for=%s@%s.%.8s, \
                state=%d): %s",
-@@ -279,10 +498,12 @@ remote_op_done(remote_fencing_op_t * op, xmlNode * data, int \
                rc, int dup)
-         goto remote_op_done_cleanup;
-     }
-
--    if (!op->delegate && data) {
-+    if (!op->delegate && data && rc != -ENODEV && rc != -EHOSTUNREACH) {
-         xmlNode *ndata = get_xpath_object("//@" F_STONITH_DELEGATE, data, \
                LOG_TRACE);
-         if(ndata) {
-             op->delegate = crm_element_value_copy(ndata, F_STONITH_DELEGATE);
-+        } else {
-+            op->delegate = crm_element_value_copy(data, F_ORIG);
-         }
-     }
-
-@@ -377,6 +598,16 @@ remote_op_timeout(gpointer userdata)
-
-     crm_debug("Action %s (%s) for %s (%s) timed out",
-               op->action, op->id, op->target, op->client_name);
-+
-+    if (op->phase == st_phase_on) {
-+        /* A remapped reboot operation timed out in the "on" phase, but the
-+         * "off" phase completed successfully, so quit trying any further
-+         * devices, and return success.
-+         */
-+        remote_op_done(op, NULL, pcmk_ok, FALSE);
-+        return FALSE;
-+    }
-+
-     op->state = st_failed;
-
-     remote_op_done(op, NULL, -ETIME, FALSE);
-@@ -426,22 +657,43 @@ topology_is_empty(stonith_topology_t *tp)
-     return TRUE;
- }
-
-+/*!
-+ * \internal
-+ * \brief Add a device to the required list for a particular phase
-+ *
-+ * \param[in,out] op      Operation to modify
-+ * \param[in]     phase   Phase to modify
-+ * \param[in]     device  Device ID to add
-+ */
- static void
--add_required_device(remote_fencing_op_t * op, const char *device)
-+add_required_device(remote_fencing_op_t *op, enum st_remap_phase phase,
-+                    const char *device)
- {
--    GListPtr match  = g_list_find_custom(op->required_list, device, sort_strings);
--    if (match) {
--        /* device already marked required */
--        return;
-+    GListPtr match  = g_list_find_custom(op->required_list[phase], device,
-+                                         sort_strings);
-+
-+    if (!match) {
-+        op->required_list[phase] = g_list_prepend(op->required_list[phase],
-+                                                  strdup(device));
-     }
--    op->required_list = g_list_prepend(op->required_list, strdup(device));
-+}
-
--    /* make sure the required devices is in the current list of devices to be \
                executed */
--    if (op->devices_list) {
--        GListPtr match  = g_list_find_custom(op->devices_list, device, \
                sort_strings);
--        if (match == NULL) {
--           op->devices_list = g_list_append(op->devices_list, strdup(device));
--        }
-+/*!
-+ * \internal
-+ * \brief Remove a device from the required list for the current phase
-+ *
-+ * \param[in,out] op      Operation to modify
-+ * \param[in]     device  Device ID to remove
-+ */
-+static void
-+remove_required_device(remote_fencing_op_t *op, const char *device)
-+{
-+    GListPtr match = g_list_find_custom(op->required_list[op->phase], device,
-+                                        sort_strings);
-+
-+    if (match) {
-+        op->required_list[op->phase] = g_list_remove(op->required_list[op->phase],
-+                                                     match->data);
-     }
- }
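add_required_device() and remove_required_device() keep one GList of device IDs
per phase, using g_list_find_custom() with a strcmp-style comparator (like
sort_strings() above) to keep entries unique. The same pattern in isolation,
with illustrative names:

    #include <glib.h>
    #include <stdio.h>
    #include <string.h>

    /* GCompareFunc wrapper around strcmp */
    static gint cmp_strings(gconstpointer a, gconstpointer b)
    {
        return strcmp(a, b);
    }

    /* Prepend a copy of device only if it is not already present */
    static GList *add_unique(GList *list, const char *device)
    {
        if (g_list_find_custom(list, device, cmp_strings) == NULL) {
            list = g_list_prepend(list, g_strdup(device));
        }
        return list;
    }

    int main(void)
    {
        GList *required = NULL;

        required = add_unique(required, "true1");
        required = add_unique(required, "true1");   /* duplicate, ignored */

        printf("%u entry(ies)\n", g_list_length(required));   /* prints 1 */

        g_list_free_full(required, g_free);
        return 0;
    }
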
-
-@@ -458,18 +710,6 @@ set_op_device_list(remote_fencing_op_t * op, GListPtr devices)
-     for (lpc = devices; lpc != NULL; lpc = lpc->next) {
-         op->devices_list = g_list_append(op->devices_list, strdup(lpc->data));
-     }
--
--    /* tack on whatever required devices have not been executed
--     * to the end of the current devices list. This ensures that
--     * the required devices will get executed regardless of what topology
--     * level they exist at. */
--    for (lpc = op->required_list; lpc != NULL; lpc = lpc->next) {
--        GListPtr match  = g_list_find_custom(op->devices_list, lpc->data, \
                sort_strings);
--        if (match == NULL) {
--           op->devices_list = g_list_append(op->devices_list, strdup(lpc->data));
--        }
--    }
--
-     op->devices = op->devices_list;
- }
-
-@@ -491,6 +731,7 @@ find_topology_for_host(const char *host)
-                 crm_info("Bad regex '%s' for fencing level", tp->node);
-             } else {
-                 status = regexec(&r_patt, host, 0, NULL, 0);
-+                regfree(&r_patt);
-             }
-
-             if (status == 0) {
-@@ -529,6 +770,9 @@ stonith_topology_next(remote_fencing_op_t * op)
-
-     set_bit(op->call_options, st_opt_topology);
-
-+    /* This is a new level, so undo any remapping left over from previous */
-+    undo_op_remap(op);
-+
-     do {
-         op->level++;
-
-@@ -539,6 +783,15 @@ stonith_topology_next(remote_fencing_op_t * op)
-                   op->level, op->target, g_list_length(tp->levels[op->level]),
-                   op->client_name, op->originator, op->id);
-         set_op_device_list(op, tp->levels[op->level]);
-+
-+        if (g_list_next(op->devices_list) && safe_str_eq(op->action, "reboot")) {
-+            /* A reboot has been requested for a topology level with multiple
-+             * devices. Instead of rebooting the devices sequentially, we will
-+             * turn them all off, then turn them all on again. (Think about
-+             * switched power outlets for redundant power supplies.)
-+             */
-+            op_phase_off(op);
-+        }
-         return pcmk_ok;
-     }
-
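The remap trigger above is deliberately cheap: g_list_next() on the list head
tests for a second device without computing the full list length, and the
action check restricts the remap to reboots. In isolation, under the same
assumptions:

    #include <glib.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        GList *level = NULL;
        const char *action = "reboot";

        level = g_list_append(level, (gpointer) "true1");
        level = g_list_append(level, (gpointer) "true2");

        /* More than one device at this level and a reboot requested? */
        if (g_list_next(level) && strcmp(action, "reboot") == 0) {
            printf("remapping multiple-device reboot\n");
        }

        g_list_free(level);
        return 0;
    }
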
-@@ -563,6 +816,7 @@ merge_duplicates(remote_fencing_op_t * op)
-     g_hash_table_iter_init(&iter, remote_op_list);
-     while (g_hash_table_iter_next(&iter, NULL, (void **)&other)) {
-         crm_node_t *peer = NULL;
-+        const char *other_action = op_requested_action(other);
-
-         if (other->state > st_exec) {
-             /* Must be in-progress */
-@@ -570,8 +824,9 @@ merge_duplicates(remote_fencing_op_t * op)
-         } else if (safe_str_neq(op->target, other->target)) {
-             /* Must be for the same node */
-             continue;
--        } else if (safe_str_neq(op->action, other->action)) {
--            crm_trace("Must be for the same action: %s vs. ", op->action, \
                other->action);
-+        } else if (safe_str_neq(op->action, other_action)) {
-+            crm_trace("Must be for the same action: %s vs. %s",
-+                      op->action, other_action);
-             continue;
-         } else if (safe_str_eq(op->client_name, other->client_name)) {
-             crm_trace("Must be for different clients: %s", op->client_name);
-@@ -602,7 +857,7 @@ merge_duplicates(remote_fencing_op_t * op)
-         if (other->total_timeout == 0) {
-             crm_trace("Making a best-guess as to the timeout used");
-             other->total_timeout = op->total_timeout =
--                TIMEOUT_MULTIPLY_FACTOR * get_op_total_timeout(op, NULL, \
                op->base_timeout);
-+                TIMEOUT_MULTIPLY_FACTOR * get_op_total_timeout(op, NULL);
-         }
-         crm_notice
-             ("Merging stonith action %s for node %s originating from client %s.%.8s \
                with identical request from %s@%s.%.8s (%ds)",
-@@ -792,16 +1047,16 @@ initiate_remote_stonith_op(crm_client_t * client, xmlNode * \
                request, gboolean ma
-                        op->id, op->state);
-     }
-
--    query = stonith_create_op(op->client_callid, op->id, STONITH_OP_QUERY, NULL, \
                0);
-+    query = stonith_create_op(op->client_callid, op->id, STONITH_OP_QUERY,
-+                              NULL, op->call_options);
-
-     crm_xml_add(query, F_STONITH_REMOTE_OP_ID, op->id);
-     crm_xml_add(query, F_STONITH_TARGET, op->target);
--    crm_xml_add(query, F_STONITH_ACTION, op->action);
-+    crm_xml_add(query, F_STONITH_ACTION, op_requested_action(op));
-     crm_xml_add(query, F_STONITH_ORIGIN, op->originator);
-     crm_xml_add(query, F_STONITH_CLIENTID, op->client_id);
-     crm_xml_add(query, F_STONITH_CLIENTNAME, op->client_name);
-     crm_xml_add_int(query, F_STONITH_TIMEOUT, op->base_timeout);
--    crm_xml_add_int(query, F_STONITH_CALLOPTS, op->call_options);
-
-     send_cluster_message(NULL, crm_msg_stonith_ng, query, FALSE);
-     free_xml(query);
-@@ -835,7 +1090,7 @@ find_best_peer(const char *device, remote_fencing_op_t * op, \
                enum find_best_peer
-         st_query_result_t *peer = iter->data;
-
-         crm_trace("Testing result from %s for %s with %d devices: %d %x",
--                  peer->host, op->target, peer->devices, peer->tried, options);
-+                  peer->host, op->target, peer->ndevices, peer->tried, options);
-         if ((options & FIND_PEER_SKIP_TARGET) && safe_str_eq(peer->host, \
                op->target)) {
-             continue;
-         }
-@@ -844,25 +1099,13 @@ find_best_peer(const char *device, remote_fencing_op_t * op, \
                enum find_best_peer
-         }
-
-         if (is_set(op->call_options, st_opt_topology)) {
--            /* Do they have the next device of the current fencing level? */
--            GListPtr match = NULL;
--
--            if (verified_devices_only && \
                !g_hash_table_lookup(peer->verified_devices, device)) {
--                continue;
--            }
-
--            match = g_list_find_custom(peer->device_list, device, sort_strings);
--            if (match) {
--                crm_trace("Removing %s from %s (%d remaining)", (char \
                *)match->data, peer->host,
--                          g_list_length(peer->device_list));
--                peer->device_list = g_list_remove(peer->device_list, match->data);
-+            if (grab_peer_device(op, peer, device, verified_devices_only)) {
-                 return peer;
-             }
-
--        } else if (peer->devices > 0 && peer->tried == FALSE) {
--            if (verified_devices_only && !g_hash_table_size(peer->verified_devices)) {
--                continue;
--            }
-+        } else if ((peer->tried == FALSE)
-+                   && count_peer_devices(op, peer, verified_devices_only)) {
-
-             /* No topology: Use the current best peer */
-             crm_trace("Simple fencing");
-@@ -883,11 +1126,14 @@ stonith_choose_peer(remote_fencing_op_t * op)
-     do {
-         if (op->devices) {
-             device = op->devices->data;
--            crm_trace("Checking for someone to fence %s with %s", op->target, \
                device);
-+            crm_trace("Checking for someone to fence (%s) %s with %s",
-+                      op->action, op->target, device);
-         } else {
--            crm_trace("Checking for someone to fence %s", op->target);
-+            crm_trace("Checking for someone to fence (%s) %s",
-+                      op->action, op->target);
-         }
-
-+        /* Best choice is a peer other than the target with verified access */
-         peer = find_best_peer(device, op, FIND_PEER_SKIP_TARGET|FIND_PEER_VERIFIED_ONLY);
-         if (peer) {
-             crm_trace("Found verified peer %s for %s", peer->host, \
                device?device:"<any>");
-@@ -899,62 +1145,101 @@ stonith_choose_peer(remote_fencing_op_t * op)
-             return NULL;
-         }
-
-+        /* If no other peer has verified access, next best is unverified access */
-         peer = find_best_peer(device, op, FIND_PEER_SKIP_TARGET);
-         if (peer) {
-             crm_trace("Found best unverified peer %s", peer->host);
-             return peer;
-         }
-
--        peer = find_best_peer(device, op, FIND_PEER_TARGET_ONLY);
--        if(peer) {
--            crm_trace("%s will fence itself", peer->host);
--            return peer;
-+        /* If no other peer can do it, last option is self-fencing
-+         * (which is never allowed for the "on" phase of a remapped reboot)
-+         */
-+        if (op->phase != st_phase_on) {
-+            peer = find_best_peer(device, op, FIND_PEER_TARGET_ONLY);
-+            if (peer) {
-+                crm_trace("%s will fence itself", peer->host);
-+                return peer;
-+            }
-         }
-
--        /* Try the next fencing level if there is one */
--    } while (is_set(op->call_options, st_opt_topology)
-+        /* Try the next fencing level if there is one (unless we're in the "on"
-+         * phase of a remapped "reboot", because we ignore errors in that case)
-+         */
-+    } while ((op->phase != st_phase_on)
-+             && is_set(op->call_options, st_opt_topology)
-              && stonith_topology_next(op) == pcmk_ok);
-
--    crm_notice("Couldn't find anyone to fence %s with %s", op->target, \
                device?device:"<any>");
-+    crm_notice("Couldn't find anyone to fence (%s) %s with %s",
-+               op->action, op->target, (device? device : "any device"));
-     return NULL;
- }
-
- static int
--get_device_timeout(st_query_result_t * peer, const char *device, int default_timeout)
-+get_device_timeout(const remote_fencing_op_t *op, const st_query_result_t *peer,
-+                   const char *device)
- {
--    gpointer res;
--    int delay_max = 0;
-+    device_properties_t *props;
-
-     if (!peer || !device) {
--        return default_timeout;
-+        return op->base_timeout;
-     }
-
--    res = g_hash_table_lookup(peer->delay_maxes, device);
--    if (res && GPOINTER_TO_INT(res) > 0) {
--        delay_max = GPOINTER_TO_INT(res);
-+    props = g_hash_table_lookup(peer->devices, device);
-+    if (!props) {
-+        return op->base_timeout;
-     }
-
--    res = g_hash_table_lookup(peer->custom_action_timeouts, device);
-+    return (props->custom_action_timeout[op->phase]?
-+           props->custom_action_timeout[op->phase] : op->base_timeout)
-+           + props->delay_max[op->phase];
-+}
-
--    return res ? GPOINTER_TO_INT(res) + delay_max : default_timeout + delay_max;
-+struct timeout_data {
-+    const remote_fencing_op_t *op;
-+    const st_query_result_t *peer;
-+    int total_timeout;
-+};
-+
-+/*!
-+ * \internal
-+ * \brief Add timeout to a total if device has not been executed yet
-+ *
-+ * \param[in] key        GHashTable key (device ID)
-+ * \param[in] value      GHashTable value (device properties)
-+ * \param[in] user_data  Timeout data
-+ */
-+static void
-+add_device_timeout(gpointer key, gpointer value, gpointer user_data)
-+{
-+    const char *device_id = key;
-+    device_properties_t *props = value;
-+    struct timeout_data *timeout = user_data;
-+
-+    if (!props->executed[timeout->op->phase]
-+        && !props->disallowed[timeout->op->phase]) {
-+        timeout->total_timeout += get_device_timeout(timeout->op,
-+                                                     timeout->peer, device_id);
-+    }
- }
-
- static int
--get_peer_timeout(st_query_result_t * peer, int default_timeout)
-+get_peer_timeout(const remote_fencing_op_t *op, const st_query_result_t *peer)
- {
--    int total_timeout = 0;
-+    struct timeout_data timeout;
-
--    GListPtr cur = NULL;
-+    timeout.op = op;
-+    timeout.peer = peer;
-+    timeout.total_timeout = 0;
-
--    for (cur = peer->device_list; cur; cur = cur->next) {
--        total_timeout += get_device_timeout(peer, cur->data, default_timeout);
--    }
-+    g_hash_table_foreach(peer->devices, add_device_timeout, &timeout);
-
--    return total_timeout ? total_timeout : default_timeout;
-+    return (timeout.total_timeout? timeout.total_timeout : op->base_timeout);
- }
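
For reference, the rewritten get_peer_timeout() above totals per-device timeouts by walking a GHashTable with a user_data accumulator. A minimal standalone sketch of that pattern, using plain GLib and simplified stand-in types (props_t below is illustrative, not pacemaker's device_properties_t):

    #include <glib.h>
    #include <stdio.h>

    typedef struct {
        int custom_timeout;   /* 0 means "fall back to the base timeout" */
        int delay_max;
    } props_t;

    struct sum_data {
        int base_timeout;
        int total;
    };

    /* GHFunc callback: add one device's effective timeout to the total */
    static void
    add_timeout(gpointer key, gpointer value, gpointer user_data)
    {
        props_t *props = value;
        struct sum_data *sum = user_data;

        sum->total += (props->custom_timeout ? props->custom_timeout
                                             : sum->base_timeout)
                      + props->delay_max;
    }

    int
    main(void)
    {
        GHashTable *devices = g_hash_table_new_full(g_str_hash, g_str_equal,
                                                    g_free, g_free);
        struct sum_data sum = { .base_timeout = 60, .total = 0 };
        props_t *p = g_new0(props_t, 1);

        p->custom_timeout = 20;
        p->delay_max = 5;
        g_hash_table_insert(devices, g_strdup("fence_ipmilan"), p);

        g_hash_table_foreach(devices, add_timeout, &sum);
        printf("total timeout: %d\n", sum.total ? sum.total : sum.base_timeout);
        g_hash_table_destroy(devices);
        return 0;
    }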
-
- static int
--get_op_total_timeout(remote_fencing_op_t * op, st_query_result_t * chosen_peer, int default_timeout)
-+get_op_total_timeout(const remote_fencing_op_t *op,
-+                     const st_query_result_t *chosen_peer)
- {
-     int total_timeout = 0;
-     stonith_topology_t *tp = find_topology_for_host(op->target);
-@@ -977,11 +1262,11 @@ get_op_total_timeout(remote_fencing_op_t * op, st_query_result_t * chosen_peer,
-             }
-             for (device_list = tp->levels[i]; device_list; device_list = device_list->next) {
-                 for (iter = op->query_results; iter != NULL; iter = iter->next) {
--                    st_query_result_t *peer = iter->data;
-+                    const st_query_result_t *peer = iter->data;
-
--                    if (g_list_find_custom(peer->device_list, device_list->data, sort_strings)) {
--                        total_timeout +=
--                            get_device_timeout(peer, device_list->data, default_timeout);
-+                    if (find_peer_device(op, peer, device_list->data)) {
-+                        total_timeout += get_device_timeout(op, peer,
-+                                                            device_list->data);
-                         break;
-                     }
-                 }               /* End Loop3: match device with peer that owns device, find device's timeout period */
-@@ -989,12 +1274,12 @@ get_op_total_timeout(remote_fencing_op_t * op, st_query_result_t * chosen_peer,
-         }                       /*End Loop1: iterate through fencing levels */
-
-     } else if (chosen_peer) {
--        total_timeout = get_peer_timeout(chosen_peer, default_timeout);
-+        total_timeout = get_peer_timeout(op, chosen_peer);
-     } else {
--        total_timeout = default_timeout;
-+        total_timeout = op->base_timeout;
-     }
-
--    return total_timeout ? total_timeout : default_timeout;
-+    return total_timeout ? total_timeout : op->base_timeout;
- }
-
- static void
-@@ -1049,6 +1334,55 @@ report_timeout_period(remote_fencing_op_t * op, int op_timeout)
-     }
- }
-
-+/*
-+ * \internal
-+ * \brief Advance an operation to the next device in its topology
-+ *
-+ * \param[in,out] op      Operation to advance
-+ * \param[in]     device  ID of device just completed
-+ * \param[in]     msg     XML reply that contained device result (if available)
-+ * \param[in]     rc      Return code of device's execution
-+ */
-+static void
-+advance_op_topology(remote_fencing_op_t *op, const char *device, xmlNode *msg,
-+                    int rc)
-+{
-+    /* Advance to the next device at this topology level, if any */
-+    if (op->devices) {
-+        op->devices = op->devices->next;
-+    }
-+
-+    /* If this device was required, it's not anymore */
-+    remove_required_device(op, device);
-+
-+    /* If there are no more devices at this topology level,
-+     * run through any required devices not already executed
-+     */
-+    if (op->devices == NULL) {
-+        op->devices = op->required_list[op->phase];
-+    }
-+
-+    if ((op->devices == NULL) && (op->phase == st_phase_off)) {
-+        /* We're done with this level and with required devices, but we had
-+         * remapped "reboot" to "off", so start over with "on". If any devices
-+         * need to be turned back on, op->devices will be non-NULL after this.
-+         */
-+        op_phase_on(op);
-+    }
-+
-+    if (op->devices) {
-+        /* Necessary devices remain, so execute the next one */
-+        crm_trace("Next for %s on behalf of %s@%s (rc was %d)",
-+                  op->target, op->originator, op->client_name, rc);
-+        call_remote_stonith(op, NULL);
-+    } else {
-+        /* We're done with all devices and phases, so finalize operation */
-+        crm_trace("Marking complex fencing op for %s as complete", op->target);
-+        op->state = st_done;
-+        remote_op_done(op, msg, rc, FALSE);
-+    }
-+}
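
The flow of advance_op_topology() is easier to see in isolation: a remapped reboot drains the device list during the "off" phase, re-runs any devices that must be turned back on, and only then finalizes. A toy sketch of that phase progression (the enum and helper here are simplified stand-ins, not pacemaker's st_remap_phase machinery):

    #include <stdio.h>

    enum phase { PHASE_REQUESTED, PHASE_OFF, PHASE_ON, PHASE_DONE };

    /* Decide where to go once the current device list is exhausted */
    static enum phase
    next_phase(enum phase current, int more_devices)
    {
        if (more_devices) {
            return current;        /* keep executing at this phase */
        }
        if (current == PHASE_OFF) {
            return PHASE_ON;       /* remapped reboot: turn devices back on */
        }
        return PHASE_DONE;         /* requested action or "on" phase: finalize */
    }

    int
    main(void)
    {
        enum phase p = next_phase(PHASE_OFF, 0);

        printf("after 'off' completes: %s\n",
               (p == PHASE_ON) ? "run 'on' phase" : "done");
        return 0;
    }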
-+
- void
- call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
- {
-@@ -1061,7 +1395,7 @@ call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
-     }
-
-     if (!op->op_timer_total) {
--        int total_timeout = get_op_total_timeout(op, peer, op->base_timeout);
-+        int total_timeout = get_op_total_timeout(op, peer);
-
-         op->total_timeout = TIMEOUT_MULTIPLY_FACTOR * total_timeout;
-         op->op_timer_total = g_timeout_add(1000 * op->total_timeout, remote_op_timeout, op);
-@@ -1071,13 +1405,13 @@ call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
-     }
-
-     if (is_set(op->call_options, st_opt_topology) && op->devices) {
--        /* Ignore any preference, they might not have the device we need */
--        /* When using topology, the stonith_choose_peer function pops off
--         * the peer from the op's query results.  Make sure to calculate
--         * the op_timeout before calling this function when topology is in use */
-+        /* Ignore any peer preference, they might not have the device we need */
-+        /* When using topology, stonith_choose_peer() removes the device from
-+         * further consideration, so be sure to calculate timeout beforehand */
-         peer = stonith_choose_peer(op);
-+
-         device = op->devices->data;
--        timeout = get_device_timeout(peer, device, op->base_timeout);
-+        timeout = get_device_timeout(op, peer, device);
-     }
-
-     if (peer) {
-@@ -1094,15 +1428,15 @@ call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
-         crm_xml_add_int(remote_op, F_STONITH_CALLOPTS, op->call_options);
-
-         if (device) {
--            timeout_one =
--                TIMEOUT_MULTIPLY_FACTOR * get_device_timeout(peer, device, op->base_timeout);
-+            timeout_one = TIMEOUT_MULTIPLY_FACTOR *
-+                          get_device_timeout(op, peer, device);
-             crm_info("Requesting that %s perform op %s %s with %s for %s (%ds)", \
                peer->host,
-                      op->action, op->target, device, op->client_name, timeout_one);
-             crm_xml_add(remote_op, F_STONITH_DEVICE, device);
-             crm_xml_add(remote_op, F_STONITH_MODE, "slave");
-
-         } else {
--            timeout_one = TIMEOUT_MULTIPLY_FACTOR * get_peer_timeout(peer, op->base_timeout);
-+            timeout_one = TIMEOUT_MULTIPLY_FACTOR * get_peer_timeout(op, peer);
-             crm_info("Requesting that %s perform op %s %s for %s (%ds, %ds)",
-                      peer->host, op->action, op->target, op->client_name, timeout_one, stonith_watchdog_timeout_ms);
-             crm_xml_add(remote_op, F_STONITH_MODE, "smart");
-@@ -1115,16 +1449,18 @@ call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
-         }
-
-         if(stonith_watchdog_timeout_ms > 0 && device && safe_str_eq(device, "watchdog")) {
--            crm_notice("Waiting %ds for %s to self-terminate for %s.%.8s (%p)",
--                       stonith_watchdog_timeout_ms/1000, op->target, op->client_name, op->id, device);
-+            crm_notice("Waiting %ds for %s to self-fence (%s) for %s.%.8s (%p)",
-+                       stonith_watchdog_timeout_ms/1000, op->target,
-+                       op->action, op->client_name, op->id, device);
-             op->op_timer_one = g_timeout_add(stonith_watchdog_timeout_ms, remote_op_watchdog_done, op);
-
--            /* TODO: We should probably look into peer->device_list to verify watchdog is going to be in use */
-+            /* TODO check devices to verify watchdog will be in use */
-         } else if(stonith_watchdog_timeout_ms > 0
-                   && safe_str_eq(peer->host, op->target)
-                   && safe_str_neq(op->action, "on")) {
--            crm_notice("Waiting %ds for %s to self-terminate for %s.%.8s (%p)",
--                       stonith_watchdog_timeout_ms/1000, op->target, op->client_name, op->id, device);
-+            crm_notice("Waiting %ds for %s to self-fence (%s) for %s.%.8s (%p)",
-+                       stonith_watchdog_timeout_ms/1000, op->target,
-+                       op->action, op->client_name, op->id, device);
-             op->op_timer_one = g_timeout_add(stonith_watchdog_timeout_ms, remote_op_watchdog_done, op);
-
-         } else {
-@@ -1137,13 +1473,23 @@ call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
-         free_xml(remote_op);
-         return;
-
-+    } else if (op->phase == st_phase_on) {
-+        /* A remapped "on" cannot be executed, but the node was already
-+         * turned off successfully, so ignore the error and continue.
-+         */
-+        crm_warn("Ignoring %s 'on' failure (no capable peers) for %s after \
                successful 'off'",
-+                 device, op->target);
-+        advance_op_topology(op, device, NULL, pcmk_ok);
-+        return;
-+
-     } else if (op->owner == FALSE) {
--        crm_err("The termination of %s for %s is not ours to control", op->target, \
                op->client_name);
-+        crm_err("Fencing (%s) of %s for %s is not ours to control",
-+                op->action, op->target, op->client_name);
-
-     } else if (op->query_timer == 0) {
-         /* We've exhausted all available peers */
--        crm_info("No remaining peers capable of terminating %s for %s (%d)", \
                op->target,
--                 op->client_name, op->state);
-+        crm_info("No remaining peers capable of fencing (%s) %s for %s (%d)",
-+                 op->target, op->action, op->client_name, op->state);
-         CRM_LOG_ASSERT(op->state < st_done);
-         remote_op_timeout(op);
-
-@@ -1153,33 +1499,37 @@ call_remote_stonith(remote_fencing_op_t * op, st_query_result_t * peer)
-         /* if the operation never left the query state,
-          * but we have all the expected replies, then no devices
-          * are available to execute the fencing operation. */
-+
-         if(stonith_watchdog_timeout_ms && (device == NULL || safe_str_eq(device, "watchdog"))) {
--            crm_notice("Waiting %ds for %s to self-terminate for %s.%.8s (%p)",
--                     stonith_watchdog_timeout_ms/1000, op->target, op->client_name, op->id, device);
-+            crm_notice("Waiting %ds for %s to self-fence (%s) for %s.%.8s (%p)",
-+                     stonith_watchdog_timeout_ms/1000, op->target,
-+                     op->action, op->client_name, op->id, device);
-
-             op->op_timer_one = g_timeout_add(stonith_watchdog_timeout_ms, remote_op_watchdog_done, op);
-             return;
-         }
-
-         if (op->state == st_query) {
--           crm_info("None of the %d peers have devices capable of terminating %s \
                for %s (%d)",
--                   op->replies, op->target, op->client_name, op->state);
-+           crm_info("None of the %d peers have devices capable of fencing (%s) %s \
                for %s (%d)",
-+                   op->replies, op->action, op->target, op->client_name,
-+                   op->state);
-
-             rc = -ENODEV;
-         } else {
--           crm_info("None of the %d peers are capable of terminating %s for %s \
                (%d)",
--                   op->replies, op->target, op->client_name, op->state);
-+           crm_info("None of the %d peers are capable of fencing (%s) %s for %s \
                (%d)",
-+                   op->replies, op->action, op->target, op->client_name,
-+                   op->state);
-         }
-
-         op->state = st_failed;
-         remote_op_done(op, NULL, rc, FALSE);
-
-     } else if (device) {
--        crm_info("Waiting for additional peers capable of terminating %s with %s \
                for %s.%.8s",
--                 op->target, device, op->client_name, op->id);
-+        crm_info("Waiting for additional peers capable of fencing (%s) %s with %s \
                for %s.%.8s",
-+                 op->action, op->target, device, op->client_name, op->id);
-     } else {
--        crm_info("Waiting for additional peers capable of terminating %s for \
                %s%.8s",
--                 op->target, op->client_name, op->id);
-+        crm_info("Waiting for additional peers capable of fencing (%s) %s for \
                %s%.8s",
-+                 op->action, op->target, op->client_name, op->id);
-     }
- }
-
-@@ -1200,7 +1550,7 @@ sort_peers(gconstpointer a, gconstpointer b)
-     const st_query_result_t *peer_a = a;
-     const st_query_result_t *peer_b = b;
-
--    return (peer_b->devices - peer_a->devices);
-+    return (peer_b->ndevices - peer_a->ndevices);
- }
-
- /*!
-@@ -1212,7 +1562,7 @@ all_topology_devices_found(remote_fencing_op_t * op)
- {
-     GListPtr device = NULL;
-     GListPtr iter = NULL;
--    GListPtr match = NULL;
-+    device_properties_t *match = NULL;
-     stonith_topology_t *tp = NULL;
-     gboolean skip_target = FALSE;
-     int i;
-@@ -1236,7 +1586,7 @@ all_topology_devices_found(remote_fencing_op_t * op)
-                 if (skip_target && safe_str_eq(peer->host, op->target)) {
-                     continue;
-                 }
--                match = g_list_find_custom(peer->device_list, device->data, sort_strings);
-+                match = find_peer_device(op, peer, device->data);
-             }
-             if (!match) {
-                 return FALSE;
-@@ -1247,10 +1597,169 @@ all_topology_devices_found(remote_fencing_op_t * op)
-     return TRUE;
- }
-
-+/*
-+ * \internal
-+ * \brief Parse action-specific device properties from XML
-+ *
-+ * \param[in]     msg     XML element containing the properties
-+ * \param[in]     peer    Name of peer that sent XML (for logs)
-+ * \param[in]     device  Device ID (for logs)
-+ * \param[in]     action  Action the properties relate to (for logs)
-+ * \param[in]     phase   Phase the properties relate to
-+ * \param[in,out] props   Device properties to update
-+ */
-+static void
-+parse_action_specific(xmlNode *xml, const char *peer, const char *device,
-+                      const char *action, remote_fencing_op_t *op,
-+                      enum st_remap_phase phase, device_properties_t *props)
-+{
-+    int required;
-+
-+    props->custom_action_timeout[phase] = 0;
-+    crm_element_value_int(xml, F_STONITH_ACTION_TIMEOUT,
-+                          &props->custom_action_timeout[phase]);
-+    if (props->custom_action_timeout[phase]) {
-+        crm_trace("Peer %s with device %s returned %s action timeout %d",
-+                  peer, device, action, props->custom_action_timeout[phase]);
-+    }
-+
-+    props->delay_max[phase] = 0;
-+    crm_element_value_int(xml, F_STONITH_DELAY_MAX, &props->delay_max[phase]);
-+    if (props->delay_max[phase]) {
-+        crm_trace("Peer %s with device %s returned maximum of random delay %d for \
                %s",
-+                  peer, device, props->delay_max[phase], action);
-+    }
-+
-+    required = 0;
-+    crm_element_value_int(xml, F_STONITH_DEVICE_REQUIRED, &required);
-+    if (required) {
-+        /* If the action is marked as required, add the device to the
-+         * operation's list of required devices for this phase. We use this
-+         * for unfencing when executing a topology. In phase 0 (requested
-+         * action) or phase 1 (remapped "off"), required devices get executed
-+         * regardless of their topology level; in phase 2 (remapped "on"),
-+         * required devices are not attempted, because the cluster will
-+         * execute them automatically later.
-+         */
-+        crm_trace("Peer %s requires device %s to execute for action %s",
-+                  peer, device, action);
-+        add_required_device(op, phase, device);
-+    }
-+
-+    /* If a reboot is remapped to off+on, it's possible that a node is allowed
-+     * to perform one action but not another.
-+     */
-+    if (crm_is_true(crm_element_value(xml, F_STONITH_ACTION_DISALLOWED))) {
-+        props->disallowed[phase] = TRUE;
-+        crm_trace("Peer %s is disallowed from executing %s for device %s",
-+                  peer, action, device);
-+    }
-+}
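
parse_action_specific() pulls optional integer attributes out of the reply with crm_element_value_int(), treating a missing attribute as zero. The same idea in plain libxml2; the attribute names match the "st_action_timeout" and "st_delay_max" string values visible in the internal.h hunk further down, while the helper itself is a sketch rather than pacemaker's API:

    #include <libxml/parser.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return an attribute as int, or dflt when the attribute is absent */
    static int
    attr_as_int(xmlNode *xml, const char *name, int dflt)
    {
        xmlChar *val = xmlGetProp(xml, (const xmlChar *) name);
        int result = dflt;

        if (val) {
            result = atoi((const char *) val);
            xmlFree(val);
        }
        return result;
    }

    int
    main(void)
    {
        const char *doc = "<device st_action_timeout=\"20\" st_delay_max=\"5\"/>";
        xmlDoc *xml = xmlReadMemory(doc, (int) strlen(doc), NULL, NULL, 0);
        xmlNode *dev = xmlDocGetRootElement(xml);

        printf("timeout=%d delay_max=%d\n",
               attr_as_int(dev, "st_action_timeout", 0),
               attr_as_int(dev, "st_delay_max", 0));
        xmlFreeDoc(xml);
        return 0;
    }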
-+
-+/*
-+ * \internal
-+ * \brief Parse one device's properties from peer's XML query reply
-+ *
-+ * \param[in]     xml       XML node containing device properties
-+ * \param[in,out] op        Operation that query and reply relate to
-+ * \param[in,out] result    Peer's results
-+ * \param[in]     device    ID of device being parsed
-+ */
-+static void
-+add_device_properties(xmlNode *xml, remote_fencing_op_t *op,
-+                      st_query_result_t *result, const char *device)
-+{
-+    xmlNode *child;
-+    int verified = 0;
-+    device_properties_t *props = calloc(1, sizeof(device_properties_t));
-+
-+    /* Add a new entry to this result's devices list */
-+    CRM_ASSERT(props != NULL);
-+    g_hash_table_insert(result->devices, strdup(device), props);
-+
-+    /* Peers with verified (monitored) access will be preferred */
-+    crm_element_value_int(xml, F_STONITH_DEVICE_VERIFIED, &verified);
-+    if (verified) {
-+        crm_trace("Peer %s has confirmed a verified device %s",
-+                  result->host, device);
-+        props->verified = TRUE;
-+    }
-+
-+    /* Parse action-specific device properties */
-+    parse_action_specific(xml, result->host, device, op_requested_action(op),
-+                          op, st_phase_requested, props);
-+    for (child = __xml_first_child(xml); child != NULL; child = __xml_next(child)) {
-+        /* Replies for "reboot" operations will include the action-specific
-+         * values for "off" and "on" in child elements, just in case the reboot
-+         * winds up getting remapped.
-+         */
-+        if (safe_str_eq(ID(child), "off")) {
-+            parse_action_specific(child, result->host, device, "off",
-+                                  op, st_phase_off, props);
-+        } else if (safe_str_eq(ID(child), "on")) {
-+            parse_action_specific(child, result->host, device, "on",
-+                                  op, st_phase_on, props);
-+        }
-+    }
-+}
-+
-+/*
-+ * \internal
-+ * \brief Parse a peer's XML query reply and add it to operation's results
-+ *
-+ * \param[in,out] op        Operation that query and reply relate to
-+ * \param[in]     host      Name of peer that sent this reply
-+ * \param[in]     ndevices  Number of devices expected in reply
-+ * \param[in]     xml       XML node containing device list
-+ *
-+ * \return Newly allocated result structure with parsed reply
-+ */
-+static st_query_result_t *
-+add_result(remote_fencing_op_t *op, const char *host, int ndevices, xmlNode *xml)
-+{
-+    st_query_result_t *result = calloc(1, sizeof(st_query_result_t));
-+    xmlNode *child;
-+
-+    CRM_CHECK(result != NULL, return NULL);
-+    result->host = strdup(host);
-+    result->devices = g_hash_table_new_full(crm_str_hash, g_str_equal, free, free);
-+
-+    /* Each child element describes one capable device available to the peer */
-+    for (child = __xml_first_child(xml); child != NULL; child = __xml_next(child)) {
-+        const char *device = ID(child);
-+
-+        if (device) {
-+            add_device_properties(child, op, result, device);
-+        }
-+    }
-+
-+    result->ndevices = g_hash_table_size(result->devices);
-+    CRM_CHECK(ndevices == result->ndevices,
-+              crm_err("Query claimed to have %d devices but %d found",
-+                      ndevices, result->ndevices));
-+
-+    op->query_results = g_list_insert_sorted(op->query_results, result, sort_peers);
-+    return result;
-+}
-+
-+/*
-+ * \internal
-+ * \brief Handle a peer's reply to our fencing query
-+ *
-+ * Parse a query result from XML and store it in the remote operation
-+ * table, and when enough replies have been received, issue a fencing request.
-+ *
-+ * \param[in] msg  XML reply received
-+ *
-+ * \return pcmk_ok on success, -errno on error
-+ *
-+ * \note See initiate_remote_stonith_op() for how the XML query was initially
-+ *       formed, and stonith_query() for how the peer formed its XML reply.
-+ */
- int
- process_remote_stonith_query(xmlNode * msg)
- {
--    int devices = 0;
-+    int ndevices = 0;
-     gboolean host_is_target = FALSE;
-     gboolean have_all_replies = FALSE;
-     const char *id = NULL;
-@@ -1259,7 +1768,6 @@ process_remote_stonith_query(xmlNode * msg)
-     st_query_result_t *result = NULL;
-     uint32_t replies_expected;
-     xmlNode *dev = get_xpath_object("//@" F_STONITH_REMOTE_OP_ID, msg, LOG_ERR);
--    xmlNode *child = NULL;
-
-     CRM_CHECK(dev != NULL, return -EPROTO);
-
-@@ -1268,7 +1776,7 @@ process_remote_stonith_query(xmlNode * msg)
-
-     dev = get_xpath_object("//@" F_STONITH_AVAILABLE_DEVICES, msg, LOG_ERR);
-     CRM_CHECK(dev != NULL, return -EPROTO);
--    crm_element_value_int(dev, F_STONITH_AVAILABLE_DEVICES, &devices);
-+    crm_element_value_int(dev, F_STONITH_AVAILABLE_DEVICES, &ndevices);
-
-     op = g_hash_table_lookup(remote_op_list, id);
-     if (op == NULL) {
-@@ -1283,75 +1791,13 @@ process_remote_stonith_query(xmlNode * msg)
-     host = crm_element_value(msg, F_ORIG);
-     host_is_target = safe_str_eq(host, op->target);
-
--    if (devices <= 0) {
--        /* If we're doing 'known' then we might need to fire anyway */
--        crm_trace("Query result %d of %d from %s for %s/%s (%d devices) %s",
--                  op->replies, replies_expected, host,
--                  op->target, op->action, devices, id);
--        if (have_all_replies) {
--            crm_info("All query replies have arrived, continuing (%d expected/%d \
                received for id %s)",
--                     replies_expected, op->replies, id);
--            call_remote_stonith(op, NULL);
--        }
--        return pcmk_ok;
--    }
--
-     crm_info("Query result %d of %d from %s for %s/%s (%d devices) %s",
-              op->replies, replies_expected, host,
--             op->target, op->action, devices, id);
--    result = calloc(1, sizeof(st_query_result_t));
--    result->host = strdup(host);
--    result->devices = devices;
--    result->custom_action_timeouts = g_hash_table_new_full(crm_str_hash, g_str_equal, free, NULL);
--    result->delay_maxes = g_hash_table_new_full(crm_str_hash, g_str_equal, free, NULL);
--    result->verified_devices = g_hash_table_new_full(crm_str_hash, g_str_equal, free, NULL);
--
--    for (child = __xml_first_child(dev); child != NULL; child = __xml_next(child)) {
--        const char *device = ID(child);
--        int action_timeout = 0;
--        int delay_max = 0;
--        int verified = 0;
--        int required = 0;
--
--        if (device) {
--            result->device_list = g_list_prepend(result->device_list, strdup(device));
--            crm_element_value_int(child, F_STONITH_ACTION_TIMEOUT, &action_timeout);
--            crm_element_value_int(child, F_STONITH_DELAY_MAX, &delay_max);
--            crm_element_value_int(child, F_STONITH_DEVICE_VERIFIED, &verified);
--            crm_element_value_int(child, F_STONITH_DEVICE_REQUIRED, &required);
--            if (action_timeout) {
--                crm_trace("Peer %s with device %s returned action timeout %d",
--                          result->host, device, action_timeout);
--                g_hash_table_insert(result->custom_action_timeouts,
--                                    strdup(device), GINT_TO_POINTER(action_timeout));
--            }
--            if (delay_max > 0) {
--                crm_trace("Peer %s with device %s returned maximum of random delay \
                %d",
--                          result->host, device, delay_max);
--                g_hash_table_insert(result->delay_maxes,
--                                    strdup(device), GINT_TO_POINTER(delay_max));
--            }
--            if (verified) {
--                crm_trace("Peer %s has confirmed a verified device %s", \
                result->host, device);
--                g_hash_table_insert(result->verified_devices,
--                                    strdup(device), GINT_TO_POINTER(verified));
--            }
--            if (required) {
--                crm_trace("Peer %s requires device %s to execute for action %s",
--                          result->host, device, op->action);
--                /* This matters when executing a topology. Required devices will get
--                 * executed regardless of their topology level. We use this for unfencing. */
--                add_required_device(op, device);
--            }
--        }
-+             op->target, op->action, ndevices, id);
-+    if (ndevices > 0) {
-+        result = add_result(op, host, ndevices, dev);
-     }
-
--    CRM_CHECK(devices == g_list_length(result->device_list),
--              crm_err("Mis-match: Query claimed to have %d devices but %d found", \
                devices,
--                      g_list_length(result->device_list)));
--
--    op->query_results = g_list_insert_sorted(op->query_results, result, sort_peers);
--
-     if (is_set(op->call_options, st_opt_topology)) {
-         /* If we start the fencing before all the topology results are in,
-          * it is possible fencing levels will be skipped because of the missing
-@@ -1368,11 +1814,13 @@ process_remote_stonith_query(xmlNode * msg)
-         }
-
-     } else if (op->state == st_query) {
-+        int nverified = count_peer_devices(op, result, TRUE);
-+
-         /* We have a result for a non-topology fencing op that looks promising,
-          * go ahead and start fencing before query timeout */
--        if (host_is_target == FALSE && g_hash_table_size(result->verified_devices)) {
-+        if (result && (host_is_target == FALSE) && nverified) {
-             /* we have a verified device living on a peer that is not the target */
--            crm_trace("Found %d verified devices", \
                g_hash_table_size(result->verified_devices));
-+            crm_trace("Found %d verified devices", nverified);
-             call_remote_stonith(op, result);
-
-         } else if (have_all_replies) {
-@@ -1384,14 +1832,25 @@ process_remote_stonith_query(xmlNode * msg)
-             crm_trace("Waiting for more peer results before launching fencing \
                operation");
-         }
-
--    } else if (op->state == st_done) {
-+    } else if (result && (op->state == st_done)) {
-         crm_info("Discarding query result from %s (%d devices): Operation is in \
                state %d",
--                 result->host, result->devices, op->state);
-+                 result->host, result->ndevices, op->state);
-     }
-
-     return pcmk_ok;
- }
-
-+/*
-+ * \internal
-+ * \brief Handle a peer's reply to a fencing request
-+ *
-+ * Parse a fencing reply from XML, and either finalize the operation
-+ * or attempt another device as appropriate.
-+ *
-+ * \param[in] msg  XML reply received
-+ *
-+ * \return pcmk_ok on success, -errno on error
-+ */
- int
- process_remote_stonith_exec(xmlNode * msg)
- {
-@@ -1472,26 +1931,20 @@ process_remote_stonith_exec(xmlNode * msg)
-             return rc;
-         }
-
--        /* An operation completed succesfully but has not yet been marked as done.
--         * Continue the topology if more devices exist at the current level, otherwise
--         * mark as done. */
-+        if ((op->phase == 2) && (rc != pcmk_ok)) {
-+            /* A remapped "on" failed, but the node was already turned off
-+             * successfully, so ignore the error and continue.
-+             */
-+            crm_warn("Ignoring %s 'on' failure (exit code %d) for %s after \
                successful 'off'",
-+                     device, rc, op->target);
-+            rc = pcmk_ok;
-+        }
-+
-         if (rc == pcmk_ok) {
--            GListPtr required_match = g_list_find_custom(op->required_list, device, sort_strings);
--            if (op->devices) {
--                /* Success, are there any more? */
--                op->devices = op->devices->next;
--            }
--            if (required_match) {
--                op->required_list = g_list_remove(op->required_list, required_match->data);
--            }
--            /* if no more devices at this fencing level, we are done,
--             * else we need to contine with executing the next device in the list */
--            if (op->devices == NULL) {
--                crm_trace("Marking complex fencing op for %s as complete", \
                op->target);
--                op->state = st_done;
--                remote_op_done(op, msg, rc, FALSE);
--                return rc;
--            }
-+            /* An operation completed successfully. Try another device if
-+             * necessary, otherwise mark the operation as done. */
-+            advance_op_topology(op, device, msg, rc);
-+            return rc;
-         } else {
-             /* This device failed, time to try another topology level. If no other
-              * levels are available, mark this operation as failed and report results. */
-@@ -1516,7 +1969,7 @@ process_remote_stonith_exec(xmlNode * msg)
-         /* fall-through and attempt other fencing action using another peer */
-     }
-
--    /* Retry on failure or execute the rest of the topology */
-+    /* Retry on failure */
-     crm_trace("Next for %s on behalf of %s@%s (rc was %d)", op->target, \
                op->originator,
-               op->client_name, rc);
-     call_remote_stonith(op, NULL);
-@@ -1595,6 +2048,9 @@ stonith_check_fence_tolerance(int tolerance, const char *target, const char *act
-             continue;
-         } else if (rop->state != st_done) {
-             continue;
-+        /* We don't have to worry about remapped reboots here
-+         * because if state is done, any remapping has been undone
-+         */
-         } else if (strcmp(rop->action, action) != 0) {
-             continue;
-         } else if ((rop->completed + tolerance) < now) {
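
The tolerance check above lets a new request piggyback on a matching operation that already completed within the last <tolerance> seconds. Reduced to its core (the struct and fields are simplified stand-ins for remote_fencing_op_t):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    struct done_op {
        const char *target;
        const char *action;
        time_t completed;
    };

    /* Nonzero when a completed op can satisfy a new target/action request */
    static int
    within_tolerance(const struct done_op *op, const char *target,
                     const char *action, int tolerance, time_t now)
    {
        return (strcmp(op->target, target) == 0)
               && (strcmp(op->action, action) == 0)
               && ((op->completed + tolerance) >= now);
    }

    int
    main(void)
    {
        time_t now = time(NULL);
        struct done_op op = { "node1", "reboot", now - 30 };

        printf("reuse recent result: %s\n",
               within_tolerance(&op, "node1", "reboot", 60, now) ? "yes" : "no");
        return 0;
    }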
-diff --git a/include/crm/fencing/internal.h b/include/crm/fencing/internal.h
-index a6f58b1..a59151b 100644
---- a/include/crm/fencing/internal.h
-+++ b/include/crm/fencing/internal.h
-@@ -63,6 +63,8 @@ xmlNode *create_device_registration_xml(const char *id, const char *namespace, c
- #  define F_STONITH_TOLERANCE     "st_tolerance"
- /*! Action specific timeout period returned in query of fencing devices. */
- #  define F_STONITH_ACTION_TIMEOUT       "st_action_timeout"
-+/*! Host in query result is not allowed to run this action */
-+#  define F_STONITH_ACTION_DISALLOWED     "st_action_disallowed"
- /*! Maximum of random fencing delay for a device */
- #  define F_STONITH_DELAY_MAX            "st_delay_max"
- /*! Has this device been verified using a monitor type
-diff --git a/include/crm/lrmd.h b/include/crm/lrmd.h
-index e3a0d63..730cad3 100644
---- a/include/crm/lrmd.h
-+++ b/include/crm/lrmd.h
-@@ -200,8 +200,6 @@ typedef struct lrmd_event_data_s {
-     enum ocf_exitcode rc;
-     /*! The lrmd status returned for exec_complete events */
-     int op_status;
--    /*! exit failure reason string from resource agent operation */
--    const char *exit_reason;
-     /*! stdout from resource agent operation */
-     const char *output;
-     /*! Timestamp of when op ran */
-@@ -226,6 +224,9 @@ typedef struct lrmd_event_data_s {
-      * to the proper client. */
-     const char *remote_nodename;
-
-+    /*! exit failure reason string from resource agent operation */
-+    const char *exit_reason;
-+
- } lrmd_event_data_t;
-
- lrmd_event_data_t *lrmd_copy_event(lrmd_event_data_t * event);
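
The two lrmd.h hunks are one logical move: exit_reason leaves the middle of lrmd_event_data_t and is re-added at the tail. Appending members at the end of a struct keeps the offsets of every existing member unchanged, which is what keeps previously built binaries working. A toy demonstration of the invariant:

    #include <stddef.h>
    #include <stdio.h>

    struct event_v1 { int rc; const char *output; };
    struct event_v2 { int rc; const char *output;
                      const char *exit_reason; /* appended; old offsets intact */ };

    int
    main(void)
    {
        printf("offset of 'output': v1=%zu v2=%zu\n",
               offsetof(struct event_v1, output),
               offsetof(struct event_v2, output));
        return 0;
    }

The node_shared_s, node_s, and resource_s hunks just below apply the same rule: fields added since the last release are shuffled to the end of their structs.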
-diff --git a/include/crm/pengine/status.h b/include/crm/pengine/status.h
-index 4bfa3fe..4214959 100644
---- a/include/crm/pengine/status.h
-+++ b/include/crm/pengine/status.h
-@@ -137,10 +137,6 @@ struct node_shared_s {
-     gboolean shutdown;
-     gboolean expected_up;
-     gboolean is_dc;
--    gboolean rsc_discovery_enabled;
--
--    gboolean remote_requires_reset;
--    gboolean remote_was_fenced;
-
-     int num_resources;
-     GListPtr running_rsc;       /* resource_t* */
-@@ -157,14 +153,17 @@ struct node_shared_s {
-     GHashTable *digest_cache;
-
-     gboolean maintenance;
-+    gboolean rsc_discovery_enabled;
-+    gboolean remote_requires_reset;
-+    gboolean remote_was_fenced;
- };
-
- struct node_s {
-     int weight;
-     gboolean fixed;
--    int rsc_discover_mode;
-     int count;
-     struct node_shared_s *details;
-+    int rsc_discover_mode;
- };
-
- #  include <crm/pengine/complex.h>
-@@ -262,7 +261,6 @@ struct resource_s {
-     int migration_threshold;
-
-     gboolean is_remote_node;
--    gboolean exclusive_discover;
-
-     unsigned long long flags;
-
-@@ -296,6 +294,7 @@ struct resource_s {
-     char *pending_task;
-
-     const char *isolation_wrapper;
-+    gboolean exclusive_discover;
- };
-
- struct pe_action_s {
-diff --git a/lib/cib/cib_ops.c b/lib/cib/cib_ops.c
-index 5f73559..8966ae2 100644
---- a/lib/cib/cib_ops.c
-+++ b/lib/cib/cib_ops.c
-@@ -373,7 +373,10 @@ cib_process_modify(const char *op, int options, const char *section, xmlNode * r
-
-         for (lpc = 0; lpc < max; lpc++) {
-             xmlNode *match = getXpathResult(xpathObj, lpc);
--            crm_debug("Destroying %s", (char *)xmlGetNodePath(match));
-+            xmlChar *match_path = xmlGetNodePath(match);
-+
-+            crm_debug("Destroying %s", match_path);
-+            free(match_path);
-             free_xml(match);
-         }
-
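
The hunk above plugs a leak: xmlGetNodePath() returns a freshly allocated xmlChar * that the caller must release. A self-contained illustration, freeing with xmlFree(), libxml2's matching deallocator:

    #include <libxml/parser.h>
    #include <libxml/tree.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        const char *doc = "<cib><status/></cib>";
        xmlDoc *xml = xmlReadMemory(doc, (int) strlen(doc), NULL, NULL, 0);
        xmlNode *node = xmlDocGetRootElement(xml)->children;
        xmlChar *path = xmlGetNodePath(node);

        printf("path: %s\n", path);   /* prints "/cib/status" */
        xmlFree(path);                /* without this, every call leaks */
        xmlFreeDoc(xml);
        return 0;
    }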
-diff --git a/lib/cib/cib_utils.c b/lib/cib/cib_utils.c
-index 28b8e81..d321517 100644
---- a/lib/cib/cib_utils.c
-+++ b/lib/cib/cib_utils.c
-@@ -533,7 +533,7 @@ cib_perform_op(const char *op, int call_options, cib_op_t * fn, gboolean is_quer
-             int current_schema = get_schema_version(schema);
-
-             if (minimum_schema == 0) {
--                minimum_schema = get_schema_version("pacemaker-1.1");
-+                minimum_schema = get_schema_version("pacemaker-1.2");
-             }
-
-             /* Does the CIB support the "update-*" attributes... */
-diff --git a/lib/cluster/membership.c b/lib/cluster/membership.c
-index 28f41cb..b7958eb 100644
---- a/lib/cluster/membership.c
-+++ b/lib/cluster/membership.c
-@@ -734,6 +734,14 @@ crm_update_peer_proc(const char *source, crm_node_t * node, uint32_t flag, const
-         if (crm_status_callback) {
-             crm_status_callback(crm_status_processes, node, &last);
-         }
-+
-+        /* The client callback shouldn't touch the peer caches,
-+         * but as a safety net, bail if the peer cache was destroyed.
-+         */
-+        if (crm_peer_cache == NULL) {
-+            return NULL;
-+        }
-+
-         if (crm_autoreap) {
-             node = crm_update_peer_state(__FUNCTION__, node,
-                                          is_set(node->processes, crm_get_cluster_proc())?
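
The pattern behind this membership.c hunk: after handing control to an externally registered callback, re-validate any shared state the callback could have torn down before touching it again. A stripped-down sketch (the cache and callback below are stand-ins, not the real crm_peer_cache plumbing):

    #include <glib.h>
    #include <stdio.h>

    static GHashTable *cache;
    static void (*status_callback)(void);

    static gpointer
    update_entry(const char *name)
    {
        if (status_callback) {
            status_callback();     /* may do anything, including drop the cache */
        }
        if (cache == NULL) {
            return NULL;           /* safety net: bail instead of dereferencing */
        }
        return g_hash_table_lookup(cache, name);
    }

    static void
    rogue_callback(void)
    {
        g_hash_table_destroy(cache);
        cache = NULL;
    }

    int
    main(void)
    {
        cache = g_hash_table_new(g_str_hash, g_str_equal);
        status_callback = rogue_callback;
        printf("lookup after callback: %p\n", update_entry("node1"));
        return 0;
    }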
-diff --git a/lib/common/Makefile.am b/lib/common/Makefile.am
-index f5c0766..a593f40 100644
---- a/lib/common/Makefile.am
-+++ b/lib/common/Makefile.am
-@@ -37,7 +37,7 @@ if BUILD_CIBSECRETS
- libcrmcommon_la_SOURCES	+= cib_secrets.c
- endif
-
--libcrmcommon_la_LDFLAGS	= -version-info 8:0:5
-+libcrmcommon_la_LDFLAGS	= -version-info 7:0:4
- libcrmcommon_la_LIBADD  = @LIBADD_DL@ $(GNUTLSLIBS)
- libcrmcommon_la_SOURCES += $(top_builddir)/lib/gnu/md5.c
-
-diff --git a/lib/common/xml.c b/lib/common/xml.c
-index e272049..8eed245 100644
---- a/lib/common/xml.c
-+++ b/lib/common/xml.c
-@@ -3430,12 +3430,18 @@ dump_xml_attr(xmlAttrPtr attr, int options, char **buffer, int *offset, int *max
- {
-     char *p_value = NULL;
-     const char *p_name = NULL;
-+    xml_private_t *p = NULL;
-
-     CRM_ASSERT(buffer != NULL);
-     if (attr == NULL || attr->children == NULL) {
-         return;
-     }
-
-+    p = attr->_private;
-+    if (p && is_set(p->flags, xpf_deleted)) {
-+        return;
-+    }
-+
-     p_name = (const char *)attr->name;
-     p_value = crm_xml_escape((const char *)attr->children->content);
-     buffer_print(*buffer, *max, *offset, " %s=\"%s\"", p_name, p_value);
-@@ -3812,6 +3818,10 @@ dump_xml_comment(xmlNode * data, int options, char **buffer, int *offset, int *m
- void
- crm_xml_dump(xmlNode * data, int options, char **buffer, int *offset, int *max, int depth)
- {
-+    if(data == NULL) {
-+        *offset = 0;
-+        *max = 0;
-+    }
- #if 0
-     if (is_not_set(options, xml_log_option_filtered)) {
-         /* Turning this code on also changes the PE tests for some reason
-@@ -4564,6 +4574,8 @@ subtract_xml_object(xmlNode * parent, xmlNode * left, xmlNode * right,
-     /* changes to name/value pairs */
-     for (xIter = crm_first_attr(left); xIter != NULL; xIter = xIter->next) {
-         const char *prop_name = (const char *)xIter->name;
-+        xmlAttrPtr right_attr = NULL;
-+        xml_private_t *p = NULL;
-
-         if (strcmp(prop_name, XML_ATTR_ID) == 0) {
-             continue;
-@@ -4582,8 +4594,13 @@ subtract_xml_object(xmlNode * parent, xmlNode * left, xmlNode * right,
-             continue;
-         }
-
-+        right_attr = xmlHasProp(right, (const xmlChar *)prop_name);
-+        if (right_attr) {
-+            p = right_attr->_private;
-+        }
-+
-         right_val = crm_element_value(right, prop_name);
--        if (right_val == NULL) {
-+        if (right_val == NULL || (p && is_set(p->flags, xpf_deleted))) {
-             /* new */
-             *changed = TRUE;
-             if (full) {
-diff --git a/lib/fencing/st_client.c b/lib/fencing/st_client.c
-index 80f0064..67114c2 100644
---- a/lib/fencing/st_client.c
-+++ b/lib/fencing/st_client.c
-@@ -1100,57 +1100,62 @@ stonith_api_device_metadata(stonith_t * stonith, int call_options, const char *a
-     if (safe_str_eq(provider, "redhat")) {
-         stonith_action_t *action = stonith_action_create(agent, "metadata", NULL, 0, 5, NULL, NULL);
-         int exec_rc = stonith_action_execute(action, &rc, &buffer);
-+        xmlNode *xml = NULL;
-+        xmlNode *actions = NULL;
-+        xmlXPathObject *xpathObj = NULL;
-
-         if (exec_rc < 0 || rc != 0 || buffer == NULL) {
-+            crm_warn("Could not obtain metadata for %s", agent);
-             crm_debug("Query failed: %d %d: %s", exec_rc, rc, crm_str(buffer));
-             free(buffer);       /* Just in case */
-             return -EINVAL;
-+        }
-
--        } else {
--
--            xmlNode *xml = string2xml(buffer);
--            xmlNode *actions = NULL;
--            xmlXPathObject *xpathObj = NULL;
-+        xml = string2xml(buffer);
-+        if(xml == NULL) {
-+            crm_warn("Metadata for %s is invalid", agent);
-+            free(buffer);
-+            return -EINVAL;
-+        }
-
--            xpathObj = xpath_search(xml, "//actions");
--            if (numXpathResults(xpathObj) > 0) {
--                actions = getXpathResult(xpathObj, 0);
--            }
-+        xpathObj = xpath_search(xml, "//actions");
-+        if (numXpathResults(xpathObj) > 0) {
-+            actions = getXpathResult(xpathObj, 0);
-+        }
-
--            freeXpathObject(xpathObj);
-+        freeXpathObject(xpathObj);
-
--            /* Now fudge the metadata so that the start/stop actions appear */
--            xpathObj = xpath_search(xml, "//action[@name='stop']");
--            if (numXpathResults(xpathObj) <= 0) {
--                xmlNode *tmp = NULL;
-+        /* Now fudge the metadata so that the start/stop actions appear */
-+        xpathObj = xpath_search(xml, "//action[@name='stop']");
-+        if (numXpathResults(xpathObj) <= 0) {
-+            xmlNode *tmp = NULL;
-
--                tmp = create_xml_node(actions, "action");
--                crm_xml_add(tmp, "name", "stop");
--                crm_xml_add(tmp, "timeout", "20s");
-+            tmp = create_xml_node(actions, "action");
-+            crm_xml_add(tmp, "name", "stop");
-+            crm_xml_add(tmp, "timeout", "20s");
-
--                tmp = create_xml_node(actions, "action");
--                crm_xml_add(tmp, "name", "start");
--                crm_xml_add(tmp, "timeout", "20s");
--            }
-+            tmp = create_xml_node(actions, "action");
-+            crm_xml_add(tmp, "name", "start");
-+            crm_xml_add(tmp, "timeout", "20s");
-+        }
-
--            freeXpathObject(xpathObj);
-+        freeXpathObject(xpathObj);
-
--            /* Now fudge the metadata so that the port isn't required in the configuration */
--            xpathObj = xpath_search(xml, "//parameter[@name='port']");
--            if (numXpathResults(xpathObj) > 0) {
--                /* We'll fill this in */
--                xmlNode *tmp = getXpathResult(xpathObj, 0);
-+        /* Now fudge the metadata so that the port isn't required in the configuration */
-+        xpathObj = xpath_search(xml, "//parameter[@name='port']");
-+        if (numXpathResults(xpathObj) > 0) {
-+            /* We'll fill this in */
-+            xmlNode *tmp = getXpathResult(xpathObj, 0);
-
--                crm_xml_add(tmp, "required", "0");
--            }
-+            crm_xml_add(tmp, "required", "0");
-+        }
-
--            freeXpathObject(xpathObj);
--            free(buffer);
--            buffer = dump_xml_formatted(xml);
--            free_xml(xml);
--            if (!buffer) {
--                return -EINVAL;
--            }
-+        freeXpathObject(xpathObj);
-+        free(buffer);
-+        buffer = dump_xml_formatted(xml);
-+        free_xml(xml);
-+        if (!buffer) {
-+            return -EINVAL;
-         }
-
-     } else {
-@@ -1280,7 +1285,10 @@ stonith_api_query(stonith_t * stonith, int call_options, const char *target,
-
-             CRM_LOG_ASSERT(match != NULL);
-             if(match != NULL) {
--                crm_info("%s[%d] = %s", "//@agent", lpc, xmlGetNodePath(match));
-+                xmlChar *match_path = xmlGetNodePath(match);
-+
-+                crm_info("%s[%d] = %s", "//@agent", lpc, match_path);
-+                free(match_path);
-                 *devices = stonith_key_value_add(*devices, NULL, crm_element_value(match, XML_ATTR_ID));
-             }
-         }
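
The stonith_api_device_metadata() rework above is behavior-preserving: failure paths now return early instead of pushing the success path into a nested else block. The shape of that refactor in miniature (load_metadata() below is a made-up stand-in, not the library function):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *
    load_metadata(int query_ok)
    {
        char *buffer = query_ok ? strdup("<resource-agent/>") : NULL;

        if (buffer == NULL) {       /* handle each failure first... */
            fprintf(stderr, "query failed\n");
            return NULL;
        }
        /* ...so the main flow stays at one indentation level */
        return buffer;
    }

    int
    main(void)
    {
        char *md = load_metadata(1);

        if (md) {
            printf("%s\n", md);
            free(md);
        }
        return 0;
    }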
-diff --git a/lib/lrmd/Makefile.am b/lib/lrmd/Makefile.am
-index e98d1e5..f961ae1 100644
---- a/lib/lrmd/Makefile.am
-+++ b/lib/lrmd/Makefile.am
-@@ -25,7 +25,7 @@ AM_CPPFLAGS         = -I$(top_builddir)/include  -I$(top_srcdir)/include     \
- lib_LTLIBRARIES = liblrmd.la
-
- liblrmd_la_SOURCES = lrmd_client.c proxy_common.c
--liblrmd_la_LDFLAGS = -version-info 3:0:0
-+liblrmd_la_LDFLAGS = -version-info 3:0:2
- liblrmd_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon.la	\
- 			$(top_builddir)/lib/services/libcrmservice.la \
- 			$(top_builddir)/lib/fencing/libstonithd.la
-diff --git a/lib/pengine/Makefile.am b/lib/pengine/Makefile.am
-index 29b7206..78da075 100644
---- a/lib/pengine/Makefile.am
-+++ b/lib/pengine/Makefile.am
-@@ -30,7 +30,7 @@ libpe_rules_la_LDFLAGS	= -version-info 2:4:0
- libpe_rules_la_SOURCES	= rules.c common.c
- libpe_rules_la_LIBADD	= $(top_builddir)/lib/common/libcrmcommon.la
-
--libpe_status_la_LDFLAGS	= -version-info 8:0:0
-+libpe_status_la_LDFLAGS	= -version-info 8:0:4
- libpe_status_la_SOURCES	=  status.c unpack.c utils.c complex.c native.c group.c clone.c rules.c common.c
- libpe_status_la_LIBADD	=  @CURSESLIBS@ $(top_builddir)/lib/common/libcrmcommon.la
-
-diff --git a/lib/pengine/unpack.c b/lib/pengine/unpack.c
-index 73c44a8..106c674 100644
---- a/lib/pengine/unpack.c
-+++ b/lib/pengine/unpack.c
-@@ -2834,8 +2834,9 @@ static bool check_operation_expiry(resource_t *rsc, node_t *node, int rc, xmlNod
-
-             node_t *remote_node = pe_find_node(data_set->nodes, rsc->id);
-             if (remote_node && remote_node->details->remote_was_fenced == 0) {
--
--                crm_info("Waiting to clear monitor failure for remote node %s until \
                fencing has occured", rsc->id);
-+                if (strstr(ID(xml_op), "last_failure")) {
-+                    crm_info("Waiting to clear monitor failure for remote node %s \
                until fencing has occured", rsc->id);
-+                }
-                 /* disabling failure timeout for this operation because we believe
-                  * fencing of the remote node should occur first. */
-                 failure_timeout = 0;
-@@ -2866,6 +2867,9 @@ static bool check_operation_expiry(resource_t *rsc, node_t *node, int rc, xmlNod
-                 } else {
-                     expired = FALSE;
-                 }
-+            } else if (rsc->remote_reconnect_interval && strstr(ID(xml_op), "last_failure")) {
-+                /* always clear last failure when reconnect interval is set */
-+                clear_failcount = 1;
-             }
-         }
-
-diff --git a/lib/services/pcmk-dbus.h b/lib/services/pcmk-dbus.h
-index afb8a2a..b9a713b 100644
---- a/lib/services/pcmk-dbus.h
-+++ b/lib/services/pcmk-dbus.h
-@@ -1,3 +1,7 @@
-+#ifndef DBUS_TIMEOUT_USE_DEFAULT
-+#  define DBUS_TIMEOUT_USE_DEFAULT -1
-+#endif
-+
- DBusConnection *pcmk_dbus_connect(void);
- void pcmk_dbus_connection_setup_with_select(DBusConnection *c);
- void pcmk_dbus_disconnect(DBusConnection *connection);
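
The pcmk-dbus.h hunk is a standard compatibility shim: older libdbus headers lack DBUS_TIMEOUT_USE_DEFAULT, so the constant is supplied only when the header did not define it. The same guard pattern, standalone:

    #include <stdio.h>

    #ifndef DBUS_TIMEOUT_USE_DEFAULT
    #  define DBUS_TIMEOUT_USE_DEFAULT -1   /* value used by newer libdbus */
    #endif

    int
    main(void)
    {
        printf("timeout sentinel: %d\n", DBUS_TIMEOUT_USE_DEFAULT);
        return 0;
    }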
-diff --git a/lrmd/lrmd.c b/lrmd/lrmd.c
-index bd4d33e..0cf98cc 100644
---- a/lrmd/lrmd.c
-+++ b/lrmd/lrmd.c
-@@ -219,6 +219,7 @@ free_lrmd_cmd(lrmd_cmd_t * cmd)
-     }
-     free(cmd->origin);
-     free(cmd->action);
-+    free(cmd->real_action);
-     free(cmd->userdata_str);
-     free(cmd->rsc_id);
-     free(cmd->output);
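
The lrmd.c hunk adds the newly introduced real_action string to the command destructor. Because free(NULL) is a no-op, unconditionally freeing every owned pointer is the simplest leak-proof shape (cmd_t below is a simplified stand-in for lrmd_cmd_t):

    #include <stdlib.h>

    typedef struct {
        char *action;
        char *real_action;   /* may legitimately stay NULL */
    } cmd_t;

    static void
    free_cmd(cmd_t *cmd)
    {
        if (cmd == NULL) {
            return;
        }
        free(cmd->action);
        free(cmd->real_action);   /* safe even when never assigned */
        free(cmd);
    }

    int
    main(void)
    {
        cmd_t *cmd = calloc(1, sizeof(cmd_t));

        free_cmd(cmd);
        return 0;
    }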
-diff --git a/pacemaker.spec.in b/pacemaker.spec.in
-index 0e3200f..2dfb4a6 100644
---- a/pacemaker.spec.in
-+++ b/pacemaker.spec.in
-@@ -54,7 +54,7 @@
-
- Name:          pacemaker
- Summary:       Scalable High-Availability cluster resource manager
--Version:       1.1.11
-+Version:       1.1.13
- Release:       %{pcmk_release}%{?dist}
- License:       GPLv2+ and LGPLv2+
- Url:           http://www.clusterlabs.org
-diff --git a/pengine/Makefile.am b/pengine/Makefile.am
-index d14d911..31532cf 100644
---- a/pengine/Makefile.am
-+++ b/pengine/Makefile.am
-@@ -61,7 +61,7 @@ endif
- noinst_HEADERS	= allocate.h utils.h pengine.h
- #utils.h pengine.h
-
--libpengine_la_LDFLAGS	= -version-info 8:0:0
-+libpengine_la_LDFLAGS	= -version-info 8:0:4
- # -L$(top_builddir)/lib/pils -lpils -export-dynamic -module -avoid-version
- libpengine_la_SOURCES	= pengine.c allocate.c utils.c constraints.c
- libpengine_la_SOURCES  += native.c group.c clone.c master.c graph.c utilization.c
-diff --git a/pengine/allocate.c b/pengine/allocate.c
-index 4b6fca1..68cafd4 100644
---- a/pengine/allocate.c
-+++ b/pengine/allocate.c
-@@ -1681,10 +1681,38 @@ apply_remote_node_ordering(pe_working_set_t *data_set)
-         resource_t *remote_rsc = NULL;
-         resource_t *container = NULL;
-
-+        if (action->rsc == NULL) {
-+            continue;
-+        }
-+
-+        /* Special case. */
-+        if (action->rsc &&
-+            action->rsc->is_remote_node &&
-+            safe_str_eq(action->task, CRM_OP_CLEAR_FAILCOUNT)) {
-+
-+            /* if we are clearing the failcount of an actual remote node connect
-+             * resource, then make sure this happens before allowing the connection
-+             * to start if we are planning on starting the connection during this
-+             * transition */
-+            custom_action_order(action->rsc,
-+                NULL,
-+                action,
-+                action->rsc,
-+                generate_op_key(action->rsc->id, RSC_START, 0),
-+                NULL,
-+                pe_order_optional,
-+                data_set);
-+
-+                continue;
-+        }
-+
-+        /* detect if the action occurs on a remote node. if so create
-+         * ordering constraints that guarantee the action occurs while
-+         * the remote node is active (after start, before stop...) things
-+         * like that */
-         if (action->node == NULL ||
-             is_remote_node(action->node) == FALSE ||
-             action->node->details->remote_rsc == NULL ||
--            action->rsc == NULL ||
-             is_set(action->flags, pe_action_pseudo)) {
-             continue;
-         }
-diff --git a/pengine/regression.sh b/pengine/regression.sh
-index d57da17..d184798 100755
---- a/pengine/regression.sh
-+++ b/pengine/regression.sh
-@@ -566,6 +566,8 @@ do_test colocated-utilization-primitive-2 "Colocated Utilization - Choose the mo
- do_test colocated-utilization-group "Colocated Utilization - Group"
- do_test colocated-utilization-clone "Colocated Utilization - Clone"
-
-+do_test utilization-check-allowed-nodes "Only check the capacities of the nodes that can run the resource"
-+
- echo ""
- do_test reprobe-target_rc "Ensure correct target_rc for reprobe of inactive resources"
- do_test node-maintenance-1 "cl#5128 - Node maintenance"
-diff --git a/pengine/test10/utilization-check-allowed-nodes.dot b/pengine/test10/utilization-check-allowed-nodes.dot
-new file mode 100644
-index 0000000..d09efbc
---- /dev/null
-+++ b/pengine/test10/utilization-check-allowed-nodes.dot
-@@ -0,0 +1,19 @@
-+digraph "g" {
-+"load_stopped_node1 node1" [ style=bold color="green" fontcolor="orange"]
-+"load_stopped_node2 node2" [ style=bold color="green" fontcolor="orange"]
-+"probe_complete node1" -> "probe_complete" [ style = bold]
-+"probe_complete node1" [ style=bold color="green" fontcolor="black"]
-+"probe_complete node2" -> "probe_complete" [ style = bold]
-+"probe_complete node2" [ style=bold color="green" fontcolor="black"]
-+"probe_complete" -> "rsc1_start_0 node2" [ style = bold]
-+"probe_complete" [ style=bold color="green" fontcolor="orange"]
-+"rsc1_monitor_0 node1" -> "probe_complete node1" [ style = bold]
-+"rsc1_monitor_0 node1" [ style=bold color="green" fontcolor="black"]
-+"rsc1_monitor_0 node2" -> "probe_complete node2" [ style = bold]
-+"rsc1_monitor_0 node2" [ style=bold color="green" fontcolor="black"]
-+"rsc1_start_0 node2" [ style=bold color="green" fontcolor="black"]
-+"rsc2_monitor_0 node1" -> "probe_complete node1" [ style = bold]
-+"rsc2_monitor_0 node1" [ style=bold color="green" fontcolor="black"]
-+"rsc2_monitor_0 node2" -> "probe_complete node2" [ style = bold]
-+"rsc2_monitor_0 node2" [ style=bold color="green" fontcolor="black"]
-+}
-diff --git a/pengine/test10/utilization-check-allowed-nodes.exp b/pengine/test10/utilization-check-allowed-nodes.exp
-new file mode 100644
-index 0000000..134ccb3
---- /dev/null
-+++ b/pengine/test10/utilization-check-allowed-nodes.exp
-@@ -0,0 +1,112 @@
-+<transition_graph cluster-delay="60s" stonith-timeout="60s" failed-stop-offset="INFINITY" failed-start-offset="INFINITY"  transition_id="0">
-+  <synapse id="0">
-+    <action_set>
-+      <rsc_op id="11" operation="start" operation_key="rsc1_start_0" on_node="node2" on_node_uuid="node2">
-+        <primitive id="rsc1" class="ocf" provider="pacemaker" type="Dummy"/>
-+        <attributes CRM_meta_timeout="20000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <pseudo_event id="4" operation="probe_complete" operation_key="probe_complete"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="1">
-+    <action_set>
-+      <rsc_op id="9" operation="monitor" operation_key="rsc1_monitor_0" on_node="node2" on_node_uuid="node2">
-+        <primitive id="rsc1" class="ocf" provider="pacemaker" type="Dummy"/>
-+        <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="2">
-+    <action_set>
-+      <rsc_op id="6" operation="monitor" operation_key="rsc1_monitor_0" on_node="node1" on_node_uuid="node1">
-+        <primitive id="rsc1" class="ocf" provider="pacemaker" type="Dummy"/>
-+        <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="3">
-+    <action_set>
-+      <rsc_op id="10" operation="monitor" operation_key="rsc2_monitor_0" on_node="node2" on_node_uuid="node2">
-+        <primitive id="rsc2" class="ocf" provider="pacemaker" type="Dummy"/>
-+        <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="4">
-+    <action_set>
-+      <rsc_op id="7" operation="monitor" operation_key="rsc2_monitor_0" on_node="node1" on_node_uuid="node1">
-+        <primitive id="rsc2" class="ocf" provider="pacemaker" type="Dummy"/>
-+        <attributes CRM_meta_op_target_rc="7" CRM_meta_timeout="20000" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="5" priority="1000000">
-+    <action_set>
-+      <rsc_op id="8" operation="probe_complete" operation_key="probe_complete-node2" on_node="node2" on_node_uuid="node2">
-+        <attributes CRM_meta_op_no_wait="true" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="9" operation="monitor" operation_key="rsc1_monitor_0" on_node="node2" on_node_uuid="node2"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="10" operation="monitor" operation_key="rsc2_monitor_0" on_node="node2" on_node_uuid="node2"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="6" priority="1000000">
-+    <action_set>
-+      <rsc_op id="5" operation="probe_complete" operation_key="probe_complete-node1" on_node="node1" on_node_uuid="node1">
-+        <attributes CRM_meta_op_no_wait="true" />
-+      </rsc_op>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="6" operation="monitor" operation_key="rsc1_monitor_0" on_node="node1" on_node_uuid="node1"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="7" operation="monitor" operation_key="rsc2_monitor_0" on_node="node1" on_node_uuid="node1"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="7">
-+    <action_set>
-+      <pseudo_event id="4" operation="probe_complete" operation_key="probe_complete">
-+        <attributes />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs>
-+      <trigger>
-+        <rsc_op id="5" operation="probe_complete" operation_key="probe_complete-node1" on_node="node1" on_node_uuid="node1"/>
-+      </trigger>
-+      <trigger>
-+        <rsc_op id="8" operation="probe_complete" operation_key="probe_complete-node2" on_node="node2" on_node_uuid="node2"/>
-+      </trigger>
-+    </inputs>
-+  </synapse>
-+  <synapse id="8">
-+    <action_set>
-+      <pseudo_event id="3" operation="load_stopped_node1" operation_key="load_stopped_node1">
-+        <attributes />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+  <synapse id="9">
-+    <action_set>
-+      <pseudo_event id="2" operation="load_stopped_node2" operation_key="load_stopped_node2">
-+        <attributes />
-+      </pseudo_event>
-+    </action_set>
-+    <inputs/>
-+  </synapse>
-+</transition_graph>
-diff --git a/pengine/test10/utilization-check-allowed-nodes.scores b/pengine/test10/utilization-check-allowed-nodes.scores
-new file mode 100644
-index 0000000..26887e2
---- /dev/null
-+++ b/pengine/test10/utilization-check-allowed-nodes.scores
-@@ -0,0 +1,5 @@
-+Allocation scores:
-+native_color: rsc1 allocation score on node1: -INFINITY
-+native_color: rsc1 allocation score on node2: 0
-+native_color: rsc2 allocation score on node1: -INFINITY
-+native_color: rsc2 allocation score on node2: 0
-diff --git a/pengine/test10/utilization-check-allowed-nodes.summary b/pengine/test10/utilization-check-allowed-nodes.summary
-new file mode 100644
-index 0000000..12bf19a
---- /dev/null
-+++ b/pengine/test10/utilization-check-allowed-nodes.summary
-@@ -0,0 +1,26 @@
-+
-+Current cluster status:
-+Online: [ node1 node2 ]
-+
-+ rsc1	(ocf::pacemaker:Dummy):	Stopped
-+ rsc2	(ocf::pacemaker:Dummy):	Stopped
-+
-+Transition Summary:
-+ * Start   rsc1	(node2)
-+
-+Executing cluster transition:
-+ * Resource action: rsc1            monitor on node2
-+ * Resource action: rsc1            monitor on node1
-+ * Resource action: rsc2            monitor on node2
-+ * Resource action: rsc2            monitor on node1
-+ * Pseudo action:   probe_complete
-+ * Pseudo action:   load_stopped_node1
-+ * Pseudo action:   load_stopped_node2
-+ * Resource action: rsc1            start on node2
-+
-+Revised cluster status:
-+Online: [ node1 node2 ]
-+
-+ rsc1	(ocf::pacemaker:Dummy):	Started node2
-+ rsc2	(ocf::pacemaker:Dummy):	Stopped
-+
-diff --git a/pengine/test10/utilization-check-allowed-nodes.xml b/pengine/test10/utilization-check-allowed-nodes.xml
-new file mode 100644
-index 0000000..39cf51f
---- /dev/null
-+++ b/pengine/test10/utilization-check-allowed-nodes.xml
-@@ -0,0 +1,39 @@
-+<cib epoch="1" num_updates="36" admin_epoch="0" validate-with="pacemaker-1.2" cib-last-written="Fri Dec  7 15:42:31 2012" have-quorum="1">
-+  <configuration>
-+    <crm_config>
-+      <cluster_property_set id="cib-bootstrap-options">
-+        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
-+        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
-+        <nvpair id="cib-bootstrap-options-placement-strategy" name="placement-strategy" value="utilization"/>
-+      </cluster_property_set>
-+    </crm_config>
-+    <nodes>
-+      <node id="node1" uname="node1">
-+        <utilization id="node1-utlization">
-+          <nvpair id="node1-utlization-cpu" name="cpu" value="4"/>
-+        </utilization>
-+      </node>
-+      <node id="node2" uname="node2">
-+        <utilization id="node2-utlization">
-+          <nvpair id="node2-utlization-cpu" name="cpu" value="2"/>
-+        </utilization>
-+      </node>
-+    </nodes>
-+    <resources>
-+      <primitive id="rsc1" class="ocf" provider="pacemaker" type="Dummy"/>
-+      <primitive id="rsc2" class="ocf" provider="pacemaker" type="Dummy">
-+        <utilization id="rsc2-utlization">
-+          <nvpair id="rsc2-utlization-cpu" name="cpu" value="4"/>
-+        </utilization>
-+      </primitive>
-+    </resources>
-+    <constraints>
-+      <rsc_location id="rsc1-location" rsc="rsc1" node="node1" score="-INFINITY"/>
-+      <rsc_colocation id="rsc2-with-rsc1" rsc="rsc2" with-rsc="rsc1" score="INFINITY"/>
-+    </constraints>
-+  </configuration>
-+  <status>
-+    <node_state id="node1" uname="node1" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="crm_simulate"/>
-+    <node_state id="node2" uname="node2" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="crm_simulate"/>
-+  </status>
-+</cib>
-diff --git a/pengine/utilization.c b/pengine/utilization.c
-index 982fcc9..db41b21 100644
---- a/pengine/utilization.c
-+++ b/pengine/utilization.c
-@@ -344,9 +344,10 @@ process_utilization(resource_t * rsc, node_t ** prefer, pe_working_set_t * data_
-     int alloc_details = scores_log_level + 1;
-
-     if (safe_str_neq(data_set->placement_strategy, "default")) {
--        GListPtr gIter = NULL;
-+        GHashTableIter iter;
-         GListPtr colocated_rscs = NULL;
-         gboolean any_capable = FALSE;
-+        node_t *node = NULL;
-
-         colocated_rscs = find_colocated_rscs(colocated_rscs, rsc, rsc);
-         if (colocated_rscs) {
-@@ -356,8 +357,11 @@ process_utilization(resource_t * rsc, node_t ** prefer, pe_working_set_t * data_
-
-             unallocated_utilization = sum_unallocated_utilization(rsc, colocated_rscs);
-
--            for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) {
--                node_t *node = (node_t *) gIter->data;
-+            g_hash_table_iter_init(&iter, rsc->allowed_nodes);
-+            while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
-+                if (can_run_resources(node) == FALSE || node->weight < 0) {
-+                    continue;
-+                }
-
-                 if (have_enough_capacity(node, rscs_id, unallocated_utilization)) {
-                     any_capable = TRUE;
-@@ -371,8 +375,11 @@ process_utilization(resource_t * rsc, node_t ** prefer, pe_working_set_t * data_
-             }
-
-             if (any_capable) {
--                for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) {
--                    node_t *node = (node_t *) gIter->data;
-+                g_hash_table_iter_init(&iter, rsc->allowed_nodes);
-+                while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
-+                    if (can_run_resources(node) == FALSE || node->weight < 0) {
-+                        continue;
-+                    }
-
-                     if (have_enough_capacity(node, rscs_id, unallocated_utilization) == FALSE) {
-                         pe_rsc_debug(rsc, "Resource %s and its colocated resources cannot be allocated to node %s: no enough capacity",
-@@ -394,8 +401,11 @@ process_utilization(resource_t * rsc, node_t ** prefer, pe_working_set_t * data_
-         }
-
-         if (any_capable == FALSE) {
--            for (gIter = data_set->nodes; gIter != NULL; gIter = gIter->next) {
--                node_t *node = (node_t *) gIter->data;
-+            g_hash_table_iter_init(&iter, rsc->allowed_nodes);
-+            while (g_hash_table_iter_next(&iter, NULL, (void **)&node)) {
-+                if (can_run_resources(node) == FALSE || node->weight < 0) {
-+                    continue;
-+                }
-
-                 if (have_enough_capacity(node, rsc->id, rsc->utilization) == FALSE) {
-                     pe_rsc_debug(rsc, "Resource %s cannot be allocated to node %s: no enough capacity",
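The functional change in the three hunks above is the iteration source: process_utilization() now walks only the resource's allowed_nodes hash table rather than every node in data_set->nodes, skipping nodes that cannot run resources or carry a negative weight. A minimal standalone sketch of the GHashTableIter pattern used there, assuming only GLib (table contents hypothetical; build with pkg-config glib-2.0):

    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        GHashTable *allowed_nodes = g_hash_table_new(g_str_hash, g_str_equal);
        GHashTableIter iter;
        gpointer key, value;

        g_hash_table_insert(allowed_nodes, (gpointer) "node1", (gpointer) "weight=-INFINITY");
        g_hash_table_insert(allowed_nodes, (gpointer) "node2", (gpointer) "weight=0");

        g_hash_table_iter_init(&iter, allowed_nodes);
        while (g_hash_table_iter_next(&iter, &key, &value)) {
            /* the patch 'continue's here for nodes that cannot run
             * resources or whose weight is below zero */
            printf("%s: %s\n", (char *) key, (char *) value);
        }
        g_hash_table_destroy(allowed_nodes);
        return 0;
    }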
-diff --git a/tools/fake_transition.c b/tools/fake_transition.c
-index e8c37f7..fe5de95 100644
---- a/tools/fake_transition.c
-+++ b/tools/fake_transition.c
-@@ -65,11 +65,14 @@ inject_transient_attr(xmlNode * cib_node, const char *name, const char *value)
-     xmlNode *attrs = NULL;
-     xmlNode *container = NULL;
-     xmlNode *nvp = NULL;
-+    xmlChar *node_path;
-     const char *node_uuid = ID(cib_node);
-     char *nvp_id = crm_concat(name, node_uuid, '-');
-
--    quiet_log("Injecting attribute %s=%s into %s '%s'", name, value, xmlGetNodePath(cib_node),
-+    node_path = xmlGetNodePath(cib_node);
-+    quiet_log("Injecting attribute %s=%s into %s '%s'", name, value, node_path,
-              ID(cib_node));
-+    free(node_path);
-
-     attrs = first_named_child(cib_node, XML_TAG_TRANSIENT_NODEATTRS);
-     if (attrs == NULL) {
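The leak fixed above is a common libxml2 pattern: xmlGetNodePath() returns a freshly allocated string, so passing it straight into a printf-style logger drops the only reference. A standalone sketch of the corrected usage (document contents hypothetical; the patch releases the string with free(), xmlFree() being the canonical libxml2 counterpart):

    #include <libxml/parser.h>
    #include <libxml/tree.h>
    #include <stdio.h>

    int main(void)
    {
        xmlDocPtr doc = xmlReadMemory("<cib><status/></cib>", 20,
                                      NULL, NULL, 0);
        xmlNodePtr status = doc->children->children;
        xmlChar *node_path = xmlGetNodePath(status); /* caller owns this */

        printf("Injecting attribute into %s\n", (char *) node_path);
        xmlFree(node_path);   /* without this, every injection leaked a path */
        xmlFreeDoc(doc);
        return 0;
    }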
-diff --git a/valgrind-pcmk.suppressions b/valgrind-pcmk.suppressions
-index e7caa55..2e382df 100644
---- a/valgrind-pcmk.suppressions
-+++ b/valgrind-pcmk.suppressions
-@@ -20,6 +20,15 @@
- }
-
- {
-+   Another bash leak
-+   Memcheck:Leak
-+   fun:malloc
-+   fun:xmalloc
-+   fun:set_default_locale
-+   fun:main
-+}
-+
-+{
-    Ignore option parsing
-    Memcheck:Leak
-    fun:realloc
-@@ -294,4 +303,4 @@
-    obj:*/libgobject-*
-    fun:call_init.part.0
-    fun:_dl_init
--}
-\ No newline at end of file
-+}
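For reference, each suppression block names an error kind (Memcheck:Leak) plus a stack of frames to match. Valgrind loads the file with its --suppressions option, and ready-to-paste candidate blocks for new entries like the one added above can be produced by rerunning the offending command under --gen-suppressions=all (binary name illustrative):

    valgrind --suppressions=valgrind-pcmk.suppressions --leak-check=full ./cib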
-diff --git a/version.m4 b/version.m4
-index 22faf65..3d5e96b 100644
---- a/version.m4
-+++ b/version.m4
-@@ -1 +1 @@
--m4_define([VERSION_NUMBER], [1.1.12])
-+m4_define([VERSION_NUMBER], [1.1.13])
diff --git a/pacemaker-rollup-3a7715d.patch b/pacemaker-rollup-3a7715d.patch
deleted file mode 100644
index 6b1935c..0000000
--- a/pacemaker-rollup-3a7715d.patch
+++ /dev/null
@@ -1,4919 +0,0 @@
-diff --git a/attrd/commands.c b/attrd/commands.c
-index 18c0523..c6586c7 100644
---- a/attrd/commands.c
-+++ b/attrd/commands.c
-@@ -832,7 +832,6 @@ attrd_cib_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void *u
-         }
-     }
-   done:
--    free(name);
-     if(a && a->changed && election_state(writer) == election_won) {
-         write_attribute(a);
-     }
-@@ -1019,8 +1018,10 @@ write_attribute(attribute_t *a)
-         crm_info("Sent update %d with %d changes for %s, id=%s, set=%s",
-                  a->update, cib_updates, a->id, (a->uuid? a->uuid : "<n/a>"), a->set);
-
--        the_cib->cmds->register_callback(
--            the_cib, a->update, 120, FALSE, strdup(a->id), "attrd_cib_callback", attrd_cib_callback);
-+        the_cib->cmds->register_callback_full(the_cib, a->update, 120, FALSE,
-+                                              strdup(a->id),
-+                                              "attrd_cib_callback",
-+                                              attrd_cib_callback, free);
-     }
-     free_xml(xml_top);
- }
-diff --git a/attrd/legacy.c b/attrd/legacy.c
-index 4aae4c4..8a18c38 100644
---- a/attrd/legacy.c
-+++ b/attrd/legacy.c
-@@ -635,6 +635,20 @@ struct attrd_callback_s {
-     char *value;
- };
-
-+/*
-+ * \internal
-+ * \brief Free an attrd callback structure
-+ */
-+static void
-+free_attrd_callback(void *user_data)
-+{
-+    struct attrd_callback_s *data = user_data;
-+
-+    free(data->attr);
-+    free(data->value);
-+    free(data);
-+}
-+
- static void
- attrd_cib_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void *user_data)
- {
-@@ -646,7 +660,7 @@ attrd_cib_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void *u
-
-     } else if (call_id < 0) {
-         crm_warn("Update %s=%s failed: %s", data->attr, data->value, pcmk_strerror(call_id));
--        goto cleanup;
-+        return;
-     }
-
-     switch (rc) {
-@@ -674,10 +688,6 @@ attrd_cib_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void *u
-             crm_err("Update %d for %s=%s failed: %s",
-                     call_id, data->attr, data->value, pcmk_strerror(rc));
-     }
--  cleanup:
--    free(data->value);
--    free(data->attr);
--    free(data);
- }
-
- void
-@@ -749,8 +759,10 @@ attrd_perform_update(attr_hash_entry_t * hash_entry)
-     if (hash_entry->value != NULL) {
-         data->value = strdup(hash_entry->value);
-     }
--    cib_conn->cmds->register_callback(cib_conn, rc, 120, FALSE, data, "attrd_cib_callback",
--                                      attrd_cib_callback);
-+    cib_conn->cmds->register_callback_full(cib_conn, rc, 120, FALSE, data,
-+                                           "attrd_cib_callback",
-+                                           attrd_cib_callback,
-+                                           free_attrd_callback);
-     return;
- }
-
-diff --git a/bumplibs.sh b/bumplibs.sh
-index 68f2f58..2044efa 100755
---- a/bumplibs.sh
-+++ b/bumplibs.sh
-@@ -3,6 +3,7 @@
- declare -A headers
- headers[crmcommon]="include/crm/common include/crm/crm.h"
- headers[crmcluster]="include/crm/cluster.h"
-+headers[crmservice]="include/crm/services.h"
- headers[transitioner]="include/crm/transition.h"
- headers[cib]="include/crm/cib.h include/crm/cib/util.h"
- headers[pe_rules]="include/crm/pengine/rules.h"
-@@ -11,8 +12,17 @@ headers[pengine]="include/crm/pengine/common.h  include/crm/pengine/complex.h  i
- headers[stonithd]="include/crm/stonith-ng.h"
- headers[lrmd]="include/crm/lrmd.h"
-
--LAST_RELEASE=`test -e /Volumes || git tag -l | grep Pacemaker | grep -v rc | sort -Vr | head -n 1`
--for lib in crmcommon crmcluster transitioner cib pe_rules pe_status stonithd pengine lrmd; do
-+if [ ! -z $1 ]; then
-+    LAST_RELEASE=$1
-+else
-+    LAST_RELEASE=`test -e /Volumes || git tag -l | grep Pacemaker | grep -v rc | sort -Vr | head -n 1`
-+fi
-+libs=$(find . -name "*.am" -exec grep "lib.*_la_LDFLAGS.*version-info"  \{\} \; | sed -e s/_la_LDFLAGS.*// -e s/^lib//)
-+for lib in $libs; do
-+    if [ -z "${headers[$lib]}" ]; then
-+	echo "Unknown headers for lib$lib"
-+	exit 0
-+    fi
-     git diff -w $LAST_RELEASE..HEAD ${headers[$lib]}
-     echo ""
-
-@@ -27,6 +37,7 @@ for lib in crmcommon crmcluster transitioner cib pe_rules pe_status stonithd pen
-     fi
-
-     sources=`grep "lib${lib}_la_SOURCES" $am | sed s/.*=// | sed 's:$(top_builddir)/::' | sed 's:$(top_srcdir)/::' | sed 's:\\\::' | sed 's:$(libpe_rules_la_SOURCES):rules.c\ common.c:'`
-+
-     full_sources=""
-     for f in $sources; do
- 	if
-@@ -48,6 +59,11 @@ for lib in crmcommon crmcluster transitioner cib pe_rules pe_status stonithd pen
- 	echo ""
- 	echo "New arguments to functions or changes to the middle of structs are incompatible additions"
- 	echo ""
-+	echo "Where possible:"
-+	echo "- move new fields to the end of structs"
-+	echo "- use bitfields instead of booleans"
-+	echo "- when adding arguments, create new functions that the old version can call"
-+	echo ""
- 	read -p "Are the changes to lib$lib: [a]dditions, [i]ncompatible additions, [r]emovals or [f]ixes? [None]: " CHANGE
-
- 	git show $LAST_RELEASE:$am | grep version-info
-diff --git a/cib/callbacks.c b/cib/callbacks.c
-index 1452ded..28844b8 100644
---- a/cib/callbacks.c
-+++ b/cib/callbacks.c
-@@ -1570,7 +1570,7 @@ static gboolean
- cib_force_exit(gpointer data)
- {
-     crm_notice("Forcing exit!");
--    terminate_cib(__FUNCTION__, TRUE);
-+    terminate_cib(__FUNCTION__, -1);
-     return FALSE;
- }
-
-@@ -1656,7 +1656,7 @@ initiate_exit(void)
-
-     active = crm_active_peers();
-     if (active < 2) {
--        terminate_cib(__FUNCTION__, FALSE);
-+        terminate_cib(__FUNCTION__, 0);
-         return;
-     }
-
-@@ -1675,9 +1675,19 @@ initiate_exit(void)
- extern int remote_fd;
- extern int remote_tls_fd;
-
-+/*
-+ * \internal
-+ * \brief Close remote sockets, free the global CIB and quit
-+ *
-+ * \param[in] caller           Name of calling function (for log message)
-+ * \param[in] fast             If 1, skip disconnect; if -1, also exit error
-+ */
- void
--terminate_cib(const char *caller, gboolean fast)
-+terminate_cib(const char *caller, int fast)
- {
-+    crm_info("%s: Exiting%s...", caller,
-+             (fast < 0)? " fast" : mainloop ? " from mainloop" : "");
-+
-     if (remote_fd > 0) {
-         close(remote_fd);
-         remote_fd = 0;
-@@ -1687,27 +1697,29 @@ terminate_cib(const char *caller, gboolean fast)
-         remote_tls_fd = 0;
-     }
-
--    if (!fast) {
--        crm_info("%s: Disconnecting from cluster infrastructure", caller);
--        crm_cluster_disconnect(&crm_cluster);
--    }
--
-     uninitializeCib();
-
--    crm_info("%s: Exiting%s...", caller, fast ? " fast" : mainloop ? " from mainloop" : "");
-+    if (fast < 0) {
-+        /* Quit fast on error */
-+        cib_ipc_servers_destroy(ipcs_ro, ipcs_rw, ipcs_shm);
-+        crm_exit(EINVAL);
-
--    if (fast == FALSE && mainloop != NULL && g_main_is_running(mainloop)) {
-+    } else if ((mainloop != NULL) && g_main_is_running(mainloop)) {
-+        /* Quit via returning from the main loop. If fast == 1, we skip the
-+         * disconnect here, and it will be done when the main loop returns
-+         * (this allows the peer status callback to avoid messing with the
-+         * peer caches).
-+         */
-+        if (fast == 0) {
-+            crm_cluster_disconnect(&crm_cluster);
-+        }
-         g_main_quit(mainloop);
-
-     } else {
--        qb_ipcs_destroy(ipcs_ro);
--        qb_ipcs_destroy(ipcs_rw);
--        qb_ipcs_destroy(ipcs_shm);
--
--        if (fast) {
--            crm_exit(EINVAL);
--        } else {
--            crm_exit(pcmk_ok);
--        }
-+        /* Quit via clean exit. Even the peer status callback can disconnect
-+         * here, because we're not returning control to the caller. */
-+        crm_cluster_disconnect(&crm_cluster);
-+        cib_ipc_servers_destroy(ipcs_ro, ipcs_rw, ipcs_shm);
-+        crm_exit(pcmk_ok);
-     }
- }
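Condensing the contract documented by the hunk's own comments, terminate_cib()'s second argument now selects one of three exit paths:

    /* summary of the tri-state 'fast' parameter, per the comments above */
    terminate_cib(__FUNCTION__, -1); /* error: destroy IPC, crm_exit(EINVAL)  */
    terminate_cib(__FUNCTION__,  0); /* clean: disconnect, quit the main loop */
    terminate_cib(__FUNCTION__,  1); /* deferred: quit the loop now, disconnect
                                      * only after g_main_run() returns, so a
                                      * peer status callback never tears down
                                      * the peer caches itself                */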
-diff --git a/cib/callbacks.h b/cib/callbacks.h
-index bca9992..a49428e 100644
---- a/cib/callbacks.h
-+++ b/cib/callbacks.h
-@@ -71,7 +71,7 @@ extern void cib_common_callback_worker(uint32_t id, uint32_t flags, xmlNode * op
-
- void cib_shutdown(int nsig);
- void initiate_exit(void);
--void terminate_cib(const char *caller, gboolean fast);
-+void terminate_cib(const char *caller, int fast);
-
- extern gboolean cib_legacy_mode(void);
-
-diff --git a/cib/main.c b/cib/main.c
-index e20a2b6..cbaf7b5 100644
---- a/cib/main.c
-+++ b/cib/main.c
-@@ -71,8 +71,6 @@ gboolean cib_register_ha(ll_cluster_t * hb_cluster, const char *client_name);
- void *hb_conn = NULL;
- #endif
-
--extern void terminate_cib(const char *caller, gboolean fast);
--
- GMainLoop *mainloop = NULL;
- const char *cib_root = NULL;
- char *cib_our_uname = NULL;
-@@ -414,7 +412,7 @@ cib_cs_destroy(gpointer user_data)
-         crm_info("Corosync disconnection complete");
-     } else {
-         crm_err("Corosync connection lost!  Exiting.");
--        terminate_cib(__FUNCTION__, TRUE);
-+        terminate_cib(__FUNCTION__, -1);
-     }
- }
- #endif
-@@ -422,30 +420,29 @@ cib_cs_destroy(gpointer user_data)
- static void
- cib_peer_update_callback(enum crm_status_type type, crm_node_t * node, const void *data)
- {
--    if ((type == crm_status_processes) && legacy_mode
--        && is_not_set(node->processes, crm_get_cluster_proc())) {
--        uint32_t old = 0;
--
--        if (data) {
--            old = *(const uint32_t *)data;
--        }
-+    switch (type) {
-+        case crm_status_processes:
-+            if (legacy_mode && is_not_set(node->processes, crm_get_cluster_proc())) {
-+                uint32_t old = data? *(const uint32_t *)data : 0;
-+
-+                if ((node->processes ^ old) & crm_proc_cpg) {
-+                    crm_info("Attempting to disable legacy mode after %s left the cluster",
-+                             node->uname);
-+                    legacy_mode = FALSE;
-+                }
-+            }
-+            break;
-
--        if ((node->processes ^ old) & crm_proc_cpg) {
--            crm_info("Attempting to disable legacy mode after %s left the cluster", node->uname);
--            legacy_mode = FALSE;
--        }
--    }
-+        case crm_status_uname:
-+        case crm_status_rstate:
-+        case crm_status_nstate:
-+            if (cib_shutdown_flag && (crm_active_peers() < 2)
-+                && crm_hash_table_size(client_connections) == 0) {
-
--    if (cib_shutdown_flag && crm_active_peers() < 2 && crm_hash_table_size(client_connections) == 0) {
--        crm_info("No more peers");
--        /* @TODO
--         * terminate_cib() calls crm_cluster_disconnect() which calls
--         * crm_peer_destroy() which destroys the peer caches, which a peer
--         * status callback shouldn't do. For now, there is a workaround in
--         * crm_update_peer_proc(), but CIB should be refactored to avoid
--         * destroying the peer caches here.
--         */
--        terminate_cib(__FUNCTION__, FALSE);
-+                crm_info("No more peers");
-+                terminate_cib(__FUNCTION__, 1);
-+            }
-+            break;
-     }
- }
-
-@@ -455,10 +452,10 @@ cib_ha_connection_destroy(gpointer user_data)
- {
-     if (cib_shutdown_flag) {
-         crm_info("Heartbeat disconnection complete... exiting");
--        terminate_cib(__FUNCTION__, FALSE);
-+        terminate_cib(__FUNCTION__, 0);
-     } else {
-         crm_err("Heartbeat connection lost!  Exiting.");
--        terminate_cib(__FUNCTION__, TRUE);
-+        terminate_cib(__FUNCTION__, -1);
-     }
- }
- #endif
-@@ -541,8 +538,12 @@ cib_init(void)
-     /* Create the mainloop and run it... */
-     mainloop = g_main_new(FALSE);
-     crm_info("Starting %s mainloop", crm_system_name);
--
-     g_main_run(mainloop);
-+
-+    /* If main loop returned, clean up and exit. We disconnect in case
-+     * terminate_cib() was called with fast=1.
-+     */
-+    crm_cluster_disconnect(&crm_cluster);
-     cib_ipc_servers_destroy(ipcs_ro, ipcs_rw, ipcs_shm);
-
-     return crm_exit(pcmk_ok);
-diff --git a/cib/messages.c b/cib/messages.c
-index 363562c..eca63b9 100644
---- a/cib/messages.c
-+++ b/cib/messages.c
-@@ -87,7 +87,7 @@ cib_process_shutdown_req(const char *op, int options, const char *section, xmlNo
-
-     } else if (cib_shutdown_flag) {
-         crm_info("Shutdown ACK from %s", host);
--        terminate_cib(__FUNCTION__, FALSE);
-+        terminate_cib(__FUNCTION__, 0);
-         return pcmk_ok;
-
-     } else {
-diff --git a/crmd/crmd_utils.h b/crmd/crmd_utils.h
-index 78ccad2..78214bf 100644
---- a/crmd/crmd_utils.h
-+++ b/crmd/crmd_utils.h
-@@ -102,11 +102,14 @@ gboolean too_many_st_failures(void);
- void st_fail_count_reset(const char * target);
- void crmd_peer_down(crm_node_t *peer, bool full);
-
-+/* Convenience macro for registering a CIB callback
-+ * (assumes that data can be freed with free())
-+ */
- #  define fsa_register_cib_callback(id, flag, data, fn) do {            \
-     CRM_ASSERT(fsa_cib_conn);                                           \
--    fsa_cib_conn->cmds->register_callback(                              \
-+    fsa_cib_conn->cmds->register_callback_full(                         \
-             fsa_cib_conn, id, 10 * (1 + crm_active_peers()),            \
--            flag, data, #fn, fn);                                       \
-+            flag, data, #fn, fn, free);                                 \
-     } while(0)
-
- #  define start_transition(state) do {					\
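Every fsa_register_cib_callback() call site now implicitly promises that its data argument can be released with free(). A hedged expansion of what the revised macro produces for a typical caller (callback name and data hypothetical):

    /* fsa_register_cib_callback(call_id, FALSE, strdup(uuid), my_cb)
     * now expands, per the macro above, to: */
    fsa_cib_conn->cmds->register_callback_full(
        fsa_cib_conn, call_id, 10 * (1 + crm_active_peers()),
        FALSE, strdup(uuid),   /* freed with free() once my_cb has run */
        "my_cb", my_cb, free);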
-diff --git a/crmd/join_client.c b/crmd/join_client.c
-index 286cd92..65e3bed 100644
---- a/crmd/join_client.c
-+++ b/crmd/join_client.c
-@@ -116,8 +116,8 @@ do_cl_join_offer_respond(long long action,
-
-     /* we only ever want the last one */
-     if (query_call_id > 0) {
--        /* Calling remove_cib_op_callback() would result in a memory leak of the data field */
-         crm_trace("Cancelling previous join query: %d", query_call_id);
-+        remove_cib_op_callback(query_call_id, FALSE);
-         query_call_id = 0;
-     }
-
-@@ -173,7 +173,6 @@ join_query_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void *
-
-   done:
-     free_xml(generation);
--    free(join_id);
- }
-
- /*	A_CL_JOIN_RESULT	*/
-diff --git a/crmd/join_dc.c b/crmd/join_dc.c
-index f777296..5280b6e 100644
---- a/crmd/join_dc.c
-+++ b/crmd/join_dc.c
-@@ -452,8 +452,6 @@ finalize_sync_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, voi
-         crm_debug("No longer the DC in S_FINALIZE_JOIN: %s/%s",
-                   AM_I_DC ? "DC" : "CRMd", fsa_state2string(fsa_state));
-     }
--
--    free(user_data);
- }
-
- static void
-diff --git a/crmd/lrm_state.c b/crmd/lrm_state.c
-index 162ad03..c03fa0b 100644
---- a/crmd/lrm_state.c
-+++ b/crmd/lrm_state.c
-@@ -490,7 +490,7 @@ remote_proxy_cb(lrmd_t *lrmd, void *userdata, xmlNode *msg)
-         if (remote_proxy_new(lrm_state->node_name, session, channel) == NULL) {
-             remote_proxy_notify_destroy(lrmd, session);
-         }
--        crm_info("new remote proxy client established to %s, session id %s", channel, session);
-+        crm_trace("new remote proxy client established to %s, session id %s", channel, session);
-     } else if (safe_str_eq(op, "destroy")) {
-         remote_proxy_end_session(session);
-
-@@ -534,7 +534,16 @@ remote_proxy_cb(lrmd_t *lrmd, void *userdata, xmlNode *msg)
-             }
-
-         } else if(is_set(flags, crm_ipc_proxied)) {
--            int rc = crm_ipc_send(proxy->ipc, request, flags, 5000, NULL);
-+            const char *type = crm_element_value(request, F_TYPE);
-+            int rc = 0;
-+
-+            if (safe_str_eq(type, T_ATTRD)
-+                && crm_element_value(request, F_ATTRD_HOST) == NULL) {
-+                crm_xml_add(request, F_ATTRD_HOST, proxy->node_name);
-+                crm_xml_add_int(request, F_ATTRD_HOST_ID, get_local_nodeid(0));
-+            }
-+
-+            rc = crm_ipc_send(proxy->ipc, request, flags, 5000, NULL);
-
-             if(rc < 0) {
-                 xmlNode *op_reply = create_xml_node(NULL, "nack");
-diff --git a/crmd/membership.c b/crmd/membership.c
-index 447e6a8..27ae710 100644
---- a/crmd/membership.c
-+++ b/crmd/membership.c
-@@ -200,7 +200,6 @@ remove_conflicting_node_callback(xmlNode * msg, int call_id, int rc,
-     do_crm_log_unlikely(rc == 0 ? LOG_DEBUG : LOG_NOTICE,
-                         "Deletion of the unknown conflicting node \"%s\": %s (rc=%d)",
-                         node_uuid, pcmk_strerror(rc), rc);
--    free(node_uuid);
- }
-
- static void
-@@ -215,11 +214,9 @@ search_conflicting_node_callback(xmlNode * msg, int call_id, int rc,
-             crm_notice("Searching conflicting nodes for %s failed: %s (%d)",
-                        new_node_uuid, pcmk_strerror(rc), rc);
-         }
--        free(new_node_uuid);
-         return;
-
-     } else if (output == NULL) {
--        free(new_node_uuid);
-         return;
-     }
-
-@@ -283,8 +280,6 @@ search_conflicting_node_callback(xmlNode * msg, int call_id, int rc,
-             free_xml(node_state_xml);
-         }
-     }
--
--    free(new_node_uuid);
- }
-
- static void
-diff --git a/crmd/pengine.c b/crmd/pengine.c
-index c9544a9..46df648 100644
---- a/crmd/pengine.c
-+++ b/crmd/pengine.c
-@@ -77,8 +77,6 @@ save_cib_contents(xmlNode * msg, int call_id, int rc, xmlNode * output, void *us
-
-         free(filename);
-     }
--
--    free(id);
- }
-
- static void
-@@ -320,9 +318,10 @@ do_pe_invoke_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void
-         crm_debug("Discarding PE request in state: %s", fsa_state2string(fsa_state));
-         return;
-
--    } else if (num_cib_op_callbacks() != 0) {
--        crm_debug("Re-asking for the CIB: %d peer updates still pending", num_cib_op_callbacks());
--
-+    /* this callback counts as 1 */
-+    } else if (num_cib_op_callbacks() > 1) {
-+        crm_debug("Re-asking for the CIB: %d other peer updates still pending",
-+                  (num_cib_op_callbacks() - 1));
-         sleep(1);
-         register_fsa_action(A_PE_INVOKE);
-         return;
-diff --git a/crmd/te_callbacks.c b/crmd/te_callbacks.c
-index 68742c2..c22b273 100644
---- a/crmd/te_callbacks.c
-+++ b/crmd/te_callbacks.c
-@@ -294,6 +294,49 @@ static char *get_node_from_xpath(const char *xpath)
-     return nodeid;
- }
-
-+static char *extract_node_uuid(const char *xpath)
-+{
-+    char *mutable_path = strdup(xpath);
-+    char *node_uuid = NULL;
-+    char *search = NULL;
-+    char *match = NULL;
-+
-+    match = strstr(mutable_path, "node_state[@id=\'") + strlen("node_state[@id=\'");
-+    search = strchr(match, '\'');
-+    search[0] = 0;
-+
-+    node_uuid = strdup(match);
-+    free(mutable_path);
-+    return node_uuid;
-+}
-+
-+static void abort_unless_down(const char *xpath, const char *op, xmlNode *change, const char *reason)
-+{
-+    char *node_uuid = NULL;
-+    crm_action_t *down = NULL;
-+
-+    if(safe_str_neq(op, "delete")) {
-+        abort_transition(INFINITY, tg_restart, reason, change);
-+        return;
-+    }
-+
-+    node_uuid = extract_node_uuid(xpath);
-+    if(node_uuid == NULL) {
-+        crm_err("Could not extract node ID from %s", xpath);
-+        abort_transition(INFINITY, tg_restart, reason, change);
-+        return;
-+    }
-+
-+    down = match_down_event(0, node_uuid, NULL, FALSE);
-+    if(down == NULL || down->executed == false) {
-+        crm_trace("Not expecting %s to be down (%s)", node_uuid, xpath);
-+        abort_transition(INFINITY, tg_restart, reason, change);
-+    } else {
-+        crm_trace("Expecting changes to %s (%s)", node_uuid, xpath);
-+    }
-+    free(node_uuid);
-+}
-+
- void
- te_update_diff(const char *event, xmlNode * msg)
- {
-@@ -388,27 +431,22 @@ te_update_diff(const char *event, xmlNode * msg)
-             break; /* Wont be packaged with any resource operations we may be waiting for */
-
-         } else if(strstr(xpath, "/"XML_TAG_TRANSIENT_NODEATTRS"[") || safe_str_eq(name, XML_TAG_TRANSIENT_NODEATTRS)) {
--            abort_transition(INFINITY, tg_restart, "Transient attribute change", change);
-+            abort_unless_down(xpath, op, change, "Transient attribute change");
-             break; /* Wont be packaged with any resource operations we may be waiting for */
-
-         } else if(strstr(xpath, "/"XML_LRM_TAG_RSC_OP"[") && safe_str_eq(op, "delete")) {
-             crm_action_t *cancel = NULL;
-             char *mutable_key = strdup(xpath);
--            char *mutable_node = strdup(xpath);
-             char *search = NULL;
-
-             const char *key = NULL;
--            const char *node_uuid = NULL;
-+            char *node_uuid = extract_node_uuid(xpath);
-
-             search = strrchr(mutable_key, '\'');
-             search[0] = 0;
-
-             key = strrchr(mutable_key, '\'') + 1;
-
--            node_uuid = strstr(mutable_node, "node_state[@id=\'") + strlen("node_state[@id=\'");
--            search = strchr(node_uuid, '\'');
--            search[0] = 0;
--
-             cancel = get_cancel_action(key, node_uuid);
-             if (cancel == NULL) {
-                 abort_transition(INFINITY, tg_restart, "Resource operation removal", change);
-@@ -422,14 +460,14 @@ te_update_diff(const char *event, xmlNode * msg)
-                 trigger_graph();
-
-             }
--            free(mutable_node);
-             free(mutable_key);
-+            free(node_uuid);
-
-         } else if(strstr(xpath, "/"XML_CIB_TAG_LRM"[") && safe_str_eq(op, "delete")) {
--            abort_transition(INFINITY, tg_restart, "Resource state removal", change);
-+            abort_unless_down(xpath, op, change, "Resource state removal");
-
-         } else if(strstr(xpath, "/"XML_CIB_TAG_STATE"[") && safe_str_eq(op, "delete")) {
--            abort_transition(INFINITY, tg_restart, "Node state removal", change);
-+            abort_unless_down(xpath, op, change, "Node state removal");
-
-         } else if(name == NULL) {
-             crm_debug("No result for %s operation to %s", op, xpath);
-@@ -717,7 +755,6 @@ cib_fencing_updated(xmlNode * msg, int call_id, int rc, xmlNode * output, void *
-     } else {
-         crm_info("Fencing update %d for %s: complete", call_id, (char *)user_data);
-     }
--    free(user_data);
- }
-
- void
-diff --git a/crmd/utils.c b/crmd/utils.c
-index 5ca4b9d..4fe3a49 100644
---- a/crmd/utils.c
-+++ b/crmd/utils.c
-@@ -999,7 +999,6 @@ erase_xpath_callback(xmlNode * msg, int call_id, int rc, xmlNode * output, void
-
-     do_crm_log_unlikely(rc == 0 ? LOG_DEBUG : LOG_NOTICE,
-                         "Deletion of \"%s\": %s (rc=%d)", xpath, pcmk_strerror(rc), rc);
--    free(xpath);
- }
-
- void
-diff --git a/cts/CIB.py b/cts/CIB.py
-index 82d02d7..8fbba6c 100644
---- a/cts/CIB.py
-+++ b/cts/CIB.py
-@@ -105,7 +105,7 @@ class CIB11(ConfigBase):
-             if not name:
-                 name = "r%s%d" % (self.CM.Env["IPagent"], self.counter)
-                 self.counter = self.counter + 1
--	    r = Resource(self.Factory, name, self.CM.Env["IPagent"], standard)
-+            r = Resource(self.Factory, name, self.CM.Env["IPagent"], standard)
-
-         r.add_op("monitor", "5s")
-         return r
-@@ -387,7 +387,7 @@ class ConfigFactory:
-         """register a constructor"""
-         _args = [constructor]
-         _args.extend(args)
--        setattr(self, methodName, apply(ConfigFactoryItem,_args, kargs))
-+        setattr(self, methodName, ConfigFactoryItem(*_args, **kargs))
-
-     def unregister(self, methodName):
-         """unregister a constructor"""
-@@ -415,7 +415,6 @@ class ConfigFactory:
-
- class ConfigFactoryItem:
-     def __init__(self, function, *args, **kargs):
--        assert callable(function), "function should be a callable obj"
-         self._function = function
-         self._args = args
-         self._kargs = kargs
-@@ -426,7 +425,7 @@ class ConfigFactoryItem:
-         _args.extend(args)
-         _kargs = self._kargs.copy()
-         _kargs.update(kargs)
--        return apply(self._function,_args,_kargs)
-+        return self._function(*_args,**_kargs)
-
- # Basic Sanity Testing
- if __name__ == '__main__':
-@@ -449,4 +448,4 @@ if __name__ == '__main__':
-
-     CibFactory = ConfigFactory(manager)
-     cib = CibFactory.createConfig("pacemaker-1.1")
--    print cib.contents()
-+    print(cib.contents())
-diff --git a/cts/CM_ais.py b/cts/CM_ais.py
-index a34f9b1..d2e2c1f 100644
---- a/cts/CM_ais.py
-+++ b/cts/CM_ais.py
-@@ -80,7 +80,7 @@ class crm_ais(crm_lha):
-         # Processes running under valgrind can't be shot with "killall -9 processname",
-         # so don't include them in the returned list
-         vgrind = self.Env["valgrind-procs"].split()
--        for key in self.fullcomplist.keys():
-+        for key in list(self.fullcomplist.keys()):
-             if self.Env["valgrind-tests"]:
-                 if key in vgrind:
-                 self.log("Filtering %s from the component list as it is being profiled by valgrind" % key)
-diff --git a/cts/CM_lha.py b/cts/CM_lha.py
-index b192272..28742d9 100755
---- a/cts/CM_lha.py
-+++ b/cts/CM_lha.py
-@@ -92,7 +92,7 @@ class crm_lha(ClusterManager):
-             self.log("Node %s is not up." % node)
-             return None
-
--        if not self.CIBsync.has_key(node) and self.Env["ClobberCIB"] == 1:
-+        if not node in self.CIBsync and self.Env["ClobberCIB"] == 1:
-             self.CIBsync[node] = 1
-             self.rsh(node, "rm -f "+CTSvars.CRM_CONFIG_DIR+"/cib*")
-
-diff --git a/cts/CTS.py b/cts/CTS.py
-index 9f9a291..634348a 100644
---- a/cts/CTS.py
-+++ b/cts/CTS.py
-@@ -69,7 +69,7 @@ function status() {
- function start() {
-     # Is it already running?
-     if
--	status
-+        status
-     then
-         return
-     fi
-@@ -94,20 +94,20 @@ case $action in
-         nohup $0 $f start >/dev/null 2>&1 </dev/null &
-         ;;
-     stop)
--	killpid
--	;;
-+        killpid
-+        ;;
-     delete)
--	killpid
--	rm -f $f
--	;;
-+        killpid
-+        rm -f $f
-+        ;;
-     mark)
--	uptime | sed s/up.*:/,/ | tr '\\n' ',' >> $f
--	echo " $*" >> $f
-+        uptime | sed s/up.*:/,/ | tr '\\n' ',' >> $f
-+        echo " $*" >> $f
-         start
--	;;
-+        ;;
-     *)
--	echo "Unknown action: $action."
--	;;
-+        echo "Unknown action: $action."
-+        ;;
- esac
- """
-
-@@ -157,7 +157,7 @@ class CtsLab:
-         self.Env.dump()
-
-     def has_key(self, key):
--        return self.Env.has_key(key)
-+        return key in self.Env.keys()
-
-     def __getitem__(self, key):
-         return self.Env[key]
-@@ -275,7 +275,7 @@ class ClusterManager(UserDict):
-         None
-
-     def _finalConditions(self):
--        for key in self.keys():
-+        for key in list(self.keys()):
-             if self[key] == None:
-                 raise ValueError("Improper derivation: self[" + key +   "] must be overridden by subclass.")
-
-@@ -299,14 +299,14 @@ class ClusterManager(UserDict):
-         if key == "Name":
-             return self.name
-
--        print "FIXME: Getting %s from %s" % (key, repr(self))
--        if self.data.has_key(key):
-+        print("FIXME: Getting %s from %s" % (key, repr(self)))
-+        if key in self.data:
-             return self.data[key]
-
-         return self.templates.get_patterns(self.Env["Name"], key)
-
-     def __setitem__(self, key, value):
--        print "FIXME: Setting %s=%s on %s" % (key, value, repr(self))
-+        print("FIXME: Setting %s=%s on %s" % (key, value, repr(self)))
-         self.data[key] = value
-
-     def key_for_node(self, node):
-@@ -333,7 +333,7 @@ class ClusterManager(UserDict):
-     def prepare(self):
-         '''Finish the Initialization process. Prepare to test...'''
-
--        print repr(self)+"prepare"
-+        print(repr(self)+"prepare")
-         for node in self.Env["nodes"]:
-             if self.StataCM(node):
-                 self.ShouldBeStatus[node] = "up"
-@@ -387,11 +387,11 @@ class ClusterManager(UserDict):
-             return None
-
-         if not self.templates["Pat:Fencing_start"]:
--            print "No start pattern"
-+            print("No start pattern")
-             return None
-
-         if not self.templates["Pat:Fencing_ok"]:
--            print "No ok pattern"
-+            print("No ok pattern")
-             return None
-
-         stonith = None
-@@ -500,7 +500,7 @@ class ClusterManager(UserDict):
-         else: self.debug("Starting %s on node %s" % (self.templates["Name"], node))
-         ret = 1
-
--        if not self.ShouldBeStatus.has_key(node):
-+        if not node in self.ShouldBeStatus:
-             self.ShouldBeStatus[node] = "down"
-
-         if self.ShouldBeStatus[node] != "down":
-@@ -871,13 +871,13 @@ class ClusterManager(UserDict):
-
-         for host in self.Env["nodes"]:
-             log_stats_file = "%s/cts-stats.csv" % CTSvars.CRM_DAEMON_DIR
--            if has_log_stats.has_key(host):
-+            if host in has_log_stats:
-                 self.rsh(host, '''bash %s %s stop''' % (log_stats_bin, log_stats_file))
-                 (rc, lines) = self.rsh(host, '''cat %s''' % log_stats_file, stdout=2)
-                 self.rsh(host, '''bash %s %s delete''' % (log_stats_bin, log_stats_file))
-
-                 fname = "cts-stats-%d-nodes-%s.csv" % (len(self.Env["nodes"]), host)
--                print "Extracted stats: %s" % fname
-+                print("Extracted stats: %s" % fname)
-                 fd = open(fname, "a")
-                 fd.writelines(lines)
-                 fd.close()
-@@ -891,7 +891,7 @@ class ClusterManager(UserDict):
-
-         for host in self.Env["nodes"]:
-             log_stats_file = "%s/cts-stats.csv" % CTSvars.CRM_DAEMON_DIR
--            if not has_log_stats.has_key(host):
-+            if not host in has_log_stats:
-
-                 global log_stats
-                 global log_stats_bin
-@@ -986,7 +986,7 @@ class Process(Component):
-         self.CM = cm
-         self.badnews_ignore = badnews_ignore
-         self.badnews_ignore.extend(common_ignore)
--	self.triggersreboot = triggersreboot
-+        self.triggersreboot = triggersreboot
-
-         if process:
-             self.proc = str(process)
-diff --git a/cts/CTSaudits.py b/cts/CTSaudits.py
-index 8d52062..e8663f2 100755
---- a/cts/CTSaudits.py
-+++ b/cts/CTSaudits.py
-@@ -108,7 +108,7 @@ class LogAudit(ClusterAudit):
-                 self.CM.log ("ERROR: Cannot execute remote command [%s] on %s" % (cmd, node))
-
-         for k in self.kinds:
--            if watch.has_key(k):
-+            if k in watch:
-                 w = watch[k]
-                 if watch_pref == "any": self.CM.log("Testing for %s logs" % (k))
-                 w.lookforall(silent=True)
-@@ -118,7 +118,7 @@ class LogAudit(ClusterAudit):
-                         self.CM.Env["LogWatcher"] = w.kind
-                     return 1
-
--        for k in watch.keys():
-+        for k in list(watch.keys()):
-             w = watch[k]
-             if w.unmatched:
-                 for regex in w.unmatched:
-@@ -226,7 +226,7 @@ class FileAudit(ClusterAudit):
-                     self.known.append(line)
-                     self.CM.log("Warning: Corosync core file on %s: %s" % (node, line))
-
--            if self.CM.ShouldBeStatus.has_key(node) and self.CM.ShouldBeStatus[node] == "down":
-+            if node in self.CM.ShouldBeStatus and self.CM.ShouldBeStatus[node] == "down":
-                 clean = 0
-                 (rc, lsout) = self.CM.rsh(node, "ls -al /dev/shm | grep qb-", None)
-                 for line in lsout:
-@@ -532,7 +532,7 @@ class CrmdStateAudit(ClusterAudit):
-         ,        "auditfail":0}
-
-     def has_key(self, key):
--        return self.Stats.has_key(key)
-+        return key in self.Stats
-
-     def __setitem__(self, key, value):
-         self.Stats[key] = value
-@@ -542,7 +542,7 @@ class CrmdStateAudit(ClusterAudit):
-
-     def incr(self, name):
-         '''Increment (or initialize) the value associated with the given name'''
--        if not self.Stats.has_key(name):
-+        if not name in self.Stats:
-             self.Stats[name] = 0
-         self.Stats[name] = self.Stats[name]+1
-
-@@ -601,7 +601,7 @@ class CIBAudit(ClusterAudit):
-         ,        "auditfail":0}
-
-     def has_key(self, key):
--        return self.Stats.has_key(key)
-+        return key in self.Stats
-
-     def __setitem__(self, key, value):
-         self.Stats[key] = value
-@@ -611,7 +611,7 @@ class CIBAudit(ClusterAudit):
-
-     def incr(self, name):
-         '''Increment (or initialize) the value associated with the given name'''
--        if not self.Stats.has_key(name):
-+        if not name in self.Stats:
-             self.Stats[name] = 0
-         self.Stats[name] = self.Stats[name]+1
-
-@@ -726,7 +726,7 @@ class PartitionAudit(ClusterAudit):
-
-     def incr(self, name):
-         '''Increment (or initialize) the value associated with the given name'''
--        if not self.Stats.has_key(name):
-+        if not name in self.Stats:
-             self.Stats[name] = 0
-         self.Stats[name] = self.Stats[name]+1
-
-diff --git a/cts/CTSscenarios.py b/cts/CTSscenarios.py
-index 2f3a69b..cc6e67e 100644
---- a/cts/CTSscenarios.py
-+++ b/cts/CTSscenarios.py
-@@ -124,7 +124,7 @@ A partially set up scenario is torn down if it fails during setup.
-
-     def incr(self, name):
-         '''Increment (or initialize) the value associated with the given name'''
--        if not self.Stats.has_key(name):
-+        if not name in self.Stats:
-             self.Stats[name] = 0
-         self.Stats[name] = self.Stats[name]+1
-
-@@ -176,7 +176,7 @@ A partially set up scenario is torn down if it fails during setup.
-
-         elapsed_time = stoptime - starttime
-         test_time = stoptime - test.get_timer()
--        if not test.has_key("min_time"):
-+        if not test["min_time"]:
-             test["elapsed_time"] = elapsed_time
-             test["min_time"] = test_time
-             test["max_time"] = test_time
-@@ -211,7 +211,7 @@ A partially set up scenario is torn down if it fails during setup.
-             }
-         self.ClusterManager.log("Test Summary")
-         for test in self.Tests:
--            for key in stat_filter.keys():
-+            for key in list(stat_filter.keys()):
-                 stat_filter[key] = test.Stats[key]
-             self.ClusterManager.log(("Test %s: "%test.name).ljust(25) + " %s"%repr(stat_filter))
-
-@@ -387,7 +387,7 @@ According to the manual page for ping:
-         '''Start the PingFest!'''
-
-         self.PingSize = 1024
--        if CM.Env.has_key("PingSize"):
-+        if "PingSize" in CM.Env.keys():
-                 self.PingSize = CM.Env["PingSize"]
-
-         CM.log("Starting %d byte flood pings" % self.PingSize)
-@@ -550,7 +550,7 @@ Test a rolling upgrade between two versions of the stack
-         return self.install(node, self.CM.Env["previous-version"])
-
-     def SetUp(self, CM):
--        print repr(self)+"prepare"
-+        print(repr(self)+"prepare")
-         CM.prepare()
-
-         # Clear out the cobwebs
-diff --git a/cts/CTStests.py b/cts/CTStests.py
-index f817004..00fcd13 100644
---- a/cts/CTStests.py
-+++ b/cts/CTStests.py
-@@ -97,13 +97,18 @@ class CTSTest:
-         self.logger.debug(args)
-
-     def has_key(self, key):
--        return self.Stats.has_key(key)
-+        return key in self.Stats
-
-     def __setitem__(self, key, value):
-         self.Stats[key] = value
-
-     def __getitem__(self, key):
--        return self.Stats[key]
-+        if str(key) == "0":
-+            raise ValueError("Bad call to 'foo in X', should reference 'foo in X.Stats' instead")
-+
-+        if key in self.Stats:
-+            return self.Stats[key]
-+        return None
-
-     def log_mark(self, msg):
-         self.debug("MARK: test %s %s %d" % (self.name,msg,time.time()))
-@@ -128,7 +133,7 @@ class CTSTest:
-
-     def incr(self, name):
-         '''Increment (or initialize) the value associated with the given name'''
--        if not self.Stats.has_key(name):
-+        if not name in self.Stats:
-             self.Stats[name] = 0
-         self.Stats[name] = self.Stats[name]+1
-
-@@ -534,7 +539,7 @@ class StonithdTest(CTSTest):
-         if not self.is_applicable_common():
-             return 0
-
--        if self.Env.has_key("DoFencing"):
-+        if "DoFencing" in self.Env.keys():
-             return self.Env["DoFencing"]
-
-         return 1
-@@ -1048,7 +1053,7 @@ class BandwidthTest(CTSTest):
-                 T1 = linesplit[0]
-                 timesplit = string.split(T1,":")
-                 time2split = string.split(timesplit[2],".")
--                time1 = (long(timesplit[0])*60+long(timesplit[1]))*60+long(time2split[0])+long(time2split[1])*0.000001
-+                time1 = (int(timesplit[0])*60+int(timesplit[1]))*60+int(time2split[0])+int(time2split[1])*0.000001
-                 break
-
-         while count < 100:
-@@ -1070,7 +1075,7 @@ class BandwidthTest(CTSTest):
-         T2 = linessplit[0]
-         timesplit = string.split(T2,":")
-         time2split = string.split(timesplit[2],".")
--        time2 = (long(timesplit[0])*60+long(timesplit[1]))*60+long(time2split[0])+long(time2split[1])*0.000001
-+        time2 = (int(timesplit[0])*60+int(timesplit[1]))*60+int(time2split[0])+int(time2split[1])*0.000001
-         time = time2-time1
-         if (time <= 0):
-             return 0
-@@ -1105,7 +1110,7 @@ class MaintenanceMode(CTSTest):
-         # fail the resource right after turning Maintenance mode on
-         # verify it is not recovered until maintenance mode is turned off
-         if action == "On":
--            pats.append("pengine.*: warning:.* Processing failed op %s for %s on" % (self.action, self.rid))
-+            pats.append(r"pengine.*:\s+warning:.*Processing failed op %s for %s on" % (self.action, self.rid))
-         else:
-             pats.append(self.templates["Pat:RscOpOK"] % (self.rid, "stop_0"))
-             pats.append(self.templates["Pat:RscOpOK"] % (self.rid, "start_0"))
-@@ -1314,7 +1319,7 @@ class ResourceRecover(CTSTest):
-         self.debug("Shooting %s aka. %s" % (rsc.clone_id, rsc.id))
-
-         pats = []
--        pats.append(r"pengine.*: warning:.* Processing failed op %s for (%s|%s) on" % (self.action,
-+        pats.append(r"pengine.*:\s+warning:.*Processing failed op %s for (%s|%s) on" % (self.action,
-             rsc.id, rsc.clone_id))
-
-         if rsc.managed():
-@@ -1574,7 +1579,7 @@ class SplitBrainTest(CTSTest):
-             p_max = len(self.Env["nodes"])
-             for node in self.Env["nodes"]:
-                 p = self.Env.RandomGen.randint(1, p_max)
--                if not partitions.has_key(p):
-+                if not p in partitions:
-                     partitions[p] = []
-                 partitions[p].append(node)
-             p_max = len(partitions.keys())
-@@ -1583,13 +1588,13 @@ class SplitBrainTest(CTSTest):
-             # else, try again
-
-         self.debug("Created %d partitions" % p_max)
--        for key in partitions.keys():
-+        for key in list(partitions.keys()):
-             self.debug("Partition["+str(key)+"]:\t"+repr(partitions[key]))
-
-         # Disabling STONITH to reduce test complexity for now
-         self.rsh(node, "crm_attribute -V -n stonith-enabled -v false")
-
--        for key in partitions.keys():
-+        for key in list(partitions.keys()):
-             self.isolate_partition(partitions[key])
-
-         count = 30
-@@ -1612,7 +1617,7 @@ class SplitBrainTest(CTSTest):
-         self.CM.partitions_expected = 1
-
-         # And heal them again
--        for key in partitions.keys():
-+        for key in list(partitions.keys()):
-             self.heal_partition(partitions[key])
-
-         # Wait for a single partition to form
-@@ -2247,11 +2252,11 @@ class RollingUpgradeTest(CTSTest):
-         if not self.is_applicable_common():
-             return None
-
--        if not self.Env.has_key("rpm-dir"):
-+        if not "rpm-dir" in self.Env.keys():
-             return None
--        if not self.Env.has_key("current-version"):
-+        if not "current-version" in self.Env.keys():
-             return None
--        if not self.Env.has_key("previous-version"):
-+        if not "previous-version" in self.Env.keys():
-             return None
-
-         return 1
-@@ -2305,7 +2310,7 @@ class BSC_AddResource(CTSTest):
-         if ":" in ip:
-             fields = ip.rpartition(":")
-             fields[2] = str(hex(int(fields[2], 16)+1))
--            print str(hex(int(f[2], 16)+1))
-+            print(str(hex(int(f[2], 16)+1)))
-         else:
-             fields = ip.rpartition('.')
-             fields[2] = str(int(fields[2])+1)
-@@ -3109,7 +3114,7 @@ class RemoteStonithd(CTSTest):
-         if not self.driver.is_applicable():
-             return False
-
--        if self.Env.has_key("DoFencing"):
-+        if "DoFencing" in self.Env.keys():
-             return self.Env["DoFencing"]
-
-         return True
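
The CTS changes above repeat one Python 2/3 compatibility idiom: d.has_key(k) becomes k in d, long() becomes int(), print statements become print() calls, and d.keys() is wrapped in list() wherever the dictionary can change during iteration. A minimal sketch of why the list() copy matters on Python 3 (hypothetical data, not from the patch):

    partitions = {1: ["node-a"], 2: []}
    # Python 3's keys() returns a live view; deleting entries while
    # iterating that view raises RuntimeError, so list() snapshots it.
    for key in list(partitions.keys()):
        if not partitions[key]:
            del partitions[key]
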
-diff --git a/cts/OCFIPraTest.py b/cts/OCFIPraTest.py
-index 9900a62..03d964b 100755
---- a/cts/OCFIPraTest.py
-+++ b/cts/OCFIPraTest.py
-@@ -28,13 +28,13 @@ from cts.CTSvars import *
-
-
- def usage():
--    print "usage: " + sys.argv[0]  \
-+    print("usage: " + sys.argv[0]  \
-     +  " [-2]"\
-     +  " [--ipbase|-i first-test-ip]"\
-     +  " [--ipnum|-n test-ip-num]"\
-     +  " [--help|-h]"\
-     +  " [--perform|-p op]"\
--    +  " [number-of-iterations]"
-+    +  " [number-of-iterations]")
-     sys.exit(1)
-
-
-@@ -71,7 +71,7 @@ def log(towrite):
-     t = time.strftime("%Y/%m/%d_%H:%M:%S\t", time.localtime(time.time()))
-     logstr = t + " "+str(towrite)
-     syslog.syslog(logstr)
--    print logstr
-+    print(logstr)
-
- if __name__ == '__main__':
-     ra = "IPaddr"
-diff --git a/cts/cib_xml.py b/cts/cib_xml.py
-index 0bd963b..3d8f8d4 100644
---- a/cts/cib_xml.py
-+++ b/cts/cib_xml.py
-@@ -19,7 +19,7 @@ class XmlBase(CibBase):
-         text = '''<%s''' % self.tag
-         if self.name:
-             text += ''' id="%s"''' % (self.name)
--        for k in self.kwargs.keys():
-+        for k in list(self.kwargs.keys()):
-             text += ''' %s="%s"''' % (k, self.kwargs[k])
-
-         if not self.children:
-@@ -149,22 +149,22 @@ class Resource(XmlBase):
-     def constraints(self):
-         text = "<constraints>"
-
--        for k in self.scores.keys():
-+        for k in list(self.scores.keys()):
-             text += '''<rsc_location id="prefer-%s" rsc="%s">''' % (k, self.name)
-             text += self.scores[k].show()
-             text += '''</rsc_location>'''
-
--        for k in self.needs.keys():
-+        for k in list(self.needs.keys()):
-             text += '''<rsc_order id="%s-after-%s" first="%s" then="%s"''' % \
                (self.name, k, k, self.name)
-             kargs = self.needs[k]
--            for kw in kargs.keys():
-+            for kw in list(kargs.keys()):
-                 text += ''' %s="%s"''' % (kw, kargs[kw])
-             text += '''/>'''
-
--        for k in self.coloc.keys():
-+        for k in list(self.coloc.keys()):
-             text += '''<rsc_colocation id="%s-with-%s" rsc="%s" with-rsc="%s"''' % \
                (self.name, k, self.name, k)
-             kargs = self.coloc[k]
--            for kw in kargs.keys():
-+            for kw in list(kargs.keys()):
-                 text += ''' %s="%s"''' % (kw, kargs[kw])
-             text += '''/>'''
-
-@@ -179,13 +179,13 @@ class Resource(XmlBase):
-
-         if len(self.meta) > 0:
-             text += '''<meta_attributes id="%s-meta">''' % self.name
--            for p in self.meta.keys():
-+            for p in list(self.meta.keys()):
-                 text += '''<nvpair id="%s-%s" name="%s" value="%s"/>''' % \
                (self.name, p, p, self.meta[p])
-             text += '''</meta_attributes>'''
-
-         if len(self.param) > 0:
-             text += '''<instance_attributes id="%s-params">''' % self.name
--            for p in self.param.keys():
-+            for p in list(self.param.keys()):
-                 text += '''<nvpair id="%s-%s" name="%s" value="%s"/>''' % \
                (self.name, p, p, self.param[p])
-             text += '''</instance_attributes>'''
-
-@@ -219,7 +219,7 @@ class Group(Resource):
-
-         if len(self.meta) > 0:
-             text += '''<meta_attributes id="%s-meta">''' % self.name
--            for p in self.meta.keys():
-+            for p in list(self.meta.keys()):
-                 text += '''<nvpair id="%s-%s" name="%s" value="%s"/>''' % \
                (self.name, p, p, self.meta[p])
-             text += '''</meta_attributes>'''
-
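
cib_xml.py builds CIB XML fragments by straight string concatenation, emitting one nvpair element per dictionary entry. The same pattern in isolation (hypothetical helper; sorted only to make the output deterministic):

    def nvpairs(owner_id, attrs):
        # One <nvpair/> per attribute, ids derived from the owner id
        text = ""
        for name in sorted(attrs):
            text += '<nvpair id="%s-%s" name="%s" value="%s"/>' % \
                    (owner_id, name, name, attrs[name])
        return text

    print(nvpairs("dummy-meta", {"target-role": "Stopped"}))
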
-diff --git a/cts/environment.py b/cts/environment.py
-index 61d4211..4ed5ced 100644
---- a/cts/environment.py
-+++ b/cts/environment.py
-@@ -92,7 +92,7 @@ class Environment:
-
-     def dump(self):
-         keys = []
--        for key in self.data.keys():
-+        for key in list(self.data.keys()):
-             keys.append(key)
-
-         keys.sort()
-@@ -106,16 +106,19 @@ class Environment:
-         if key == "nodes":
-             return True
-
--        return self.data.has_key(key)
-+        return key in self.data
-
-     def __getitem__(self, key):
-+        if str(key) == "0":
-+            raise ValueError("Bad call to 'foo in X', should reference 'foo in \
                X.keys()' instead")
-+
-         if key == "nodes":
-             return self.Nodes
-
-         elif key == "Name":
-             return self.get_stack_short()
-
--        elif self.data.has_key(key):
-+        elif key in self.data:
-             return self.data[key]
-
-         else:
-@@ -175,12 +178,12 @@ class Environment:
-             self.data["Stack"] = "corosync (plugin v0)"
-
-         else:
--            print "Unknown stack: "+name
-+            raise ValueError("Unknown stack: "+name)
-             sys.exit(1)
-
-     def get_stack_short(self):
-         # Create the Cluster Manager object
--        if not self.data.has_key("Stack"):
-+        if not "Stack" in self.data:
-             return "unknown"
-
-         elif self.data["Stack"] == "heartbeat":
-@@ -202,12 +205,12 @@ class Environment:
-             return "crm-plugin-v0"
-
-         else:
--            LogFactory().log("Unknown stack: "+self.data["stack"])
--            sys.exit(1)
-+            LogFactory().log("Unknown stack: "+self["stack"])
-+            raise ValueError("Unknown stack: "+self["stack"])
-
-     def detect_syslog(self):
-         # Detect syslog variant
--        if not self.has_key("syslogd"):
-+        if not "syslogd" in self.data:
-             if self["have_systemd"]:
-                 # Systemd
-                 self["syslogd"] = self.rsh(self.target, "systemctl list-units | \
                grep syslog.*\.service.*active.*running | sed 's:.service.*::'", \
                stdout=1).strip()
-@@ -215,13 +218,13 @@ class Environment:
-                 # SYS-V
-                 self["syslogd"] = self.rsh(self.target, "chkconfig --list | grep \
                syslog.*on | awk '{print $1}' | head -n 1", stdout=1).strip()
-
--            if not self.has_key("syslogd") or not self["syslogd"]:
-+            if not "syslogd" in self.data or not self["syslogd"]:
-                 # default
-                 self["syslogd"] = "rsyslog"
-
-     def detect_at_boot(self):
-         # Detect if the cluster starts at boot
--        if not self.has_key("at-boot"):
-+        if not "at-boot" in self.data:
-             atboot = 0
-
-             if self["have_systemd"]:
-@@ -237,7 +240,7 @@ class Environment:
-
-     def detect_ip_offset(self):
-         # Try to determine an offset for IPaddr resources
--        if self["CIBResource"] and not self.has_key("IPBase"):
-+        if self["CIBResource"] and not "IPBase" in self.data:
-             network=self.rsh(self.target, "ip addr | grep inet | grep -v -e link -e inet6 -e '/32' -e ' lo' | awk '{print $2}'", stdout=1).strip()
-             self["IPBase"] = self.rsh(self.target, "nmap -sn -n %s | grep 'scan report' | awk '{print $NF}' | sed 's:(::' | sed 's:)::' | sort -V | tail -n 1" % network, stdout=1).strip()
-             if not self["IPBase"]:
-@@ -261,7 +264,7 @@ class Environment:
-
-     def validate(self):
-         if len(self["nodes"]) < 1:
--            print "No nodes specified!"
-+            print("No nodes specified!")
-             sys.exit(1)
-
-     def discover(self):
-@@ -276,7 +279,7 @@ class Environment:
-                 break;
-         self["cts-master"] = master
-
--        if not self.has_key("have_systemd"):
-+        if not "have_systemd" in self.data:
-             self["have_systemd"] = not self.rsh(self.target, "systemctl \
                list-units")
-
-         self.detect_syslog()
-@@ -390,7 +393,7 @@ class Environment:
-                     self["DoStonith"]=1
-                     self["stonith-type"] = "fence_openstack"
-
--                    print "Obtaining OpenStack credentials from the current \
                environment"
-+                    print("Obtaining OpenStack credentials from the current \
                environment")
-                     self["stonith-params"] = \
                "region=%s,tenant=%s,auth=%s,user=%s,password=%s" % (
-                         os.environ['OS_REGION_NAME'],
-                         os.environ['OS_TENANT_NAME'],
-@@ -403,7 +406,7 @@ class Environment:
-                     self["DoStonith"]=1
-                     self["stonith-type"] = "fence_rhevm"
-
--                    print "Obtaining RHEV-M credentials from the current \
                environment"
-+                    print("Obtaining RHEV-M credentials from the current \
                environment")
-                     self["stonith-params"] = \
                "login=%s,passwd=%s,ipaddr=%s,ipport=%s,ssl=1,shell_timeout" % (
-                         os.environ['RHEVM_USERNAME'],
-                         os.environ['RHEVM_PASSWORD'],
-@@ -442,7 +445,7 @@ class Environment:
-                 try:
-                     float(args[i+1])
-                 except ValueError:
--                    print ("--xmit-loss parameter should be float")
-+                    print("--xmit-loss parameter should be float")
-                     self.usage(args[i+1])
-                 skipthis=1
-                 self["XmitLoss"] = args[i+1]
-@@ -451,7 +454,7 @@ class Environment:
-                 try:
-                     float(args[i+1])
-                 except ValueError:
--                    print ("--recv-loss parameter should be float")
-+                    print("--recv-loss parameter should be float")
-                     self.usage(args[i+1])
-                 skipthis=1
-                 self["RecvLoss"] = args[i+1]
-@@ -503,7 +506,7 @@ class Environment:
-                     self["DoStonith"]=1
-                     self["stonith-type"] = "fence_rhevm"
-
--                    print "Obtaining RHEV-M credentials from the current \
                environment"
-+                    print("Obtaining RHEV-M credentials from the current \
                environment")
-                     self["stonith-params"] = \
                "login=%s,passwd=%s,ipaddr=%s,ipport=%s,ssl=1,shell_timeout" % (
-                         os.environ['RHEVM_USERNAME'],
-                         os.environ['RHEVM_PASSWORD'],
-@@ -605,7 +608,7 @@ class Environment:
-                 skipthis=1
-                 (name, value) = args[i+1].split('=')
-                 self[name] = value
--                print "Setting %s = %s" % (name, value)
-+                print("Setting %s = %s" % (name, value))
-
-             elif args[i] == "--help":
-                 self.usage(args[i], 0)
-@@ -622,52 +625,52 @@ class Environment:
-
-     def usage(self, arg, status=1):
-         if status:
--            print "Illegal argument %s" % arg
--        print "usage: " + sys.argv[0] +" [options] number-of-iterations"
--        print "\nCommon options: "
--        print "\t [--nodes 'node list']        list of cluster nodes separated by \
                whitespace"
--        print "\t [--group | -g 'name']        use the nodes listed in the named \
                DSH group (~/.dsh/groups/$name)"
--        print "\t [--limit-nodes max]          only use the first 'max' cluster \
                nodes supplied with --nodes"
--        print "\t [--stack (v0|v1|cman|corosync|heartbeat|openais)]    which \
                cluster stack is installed"
--        print "\t [--list-tests]               list the valid tests"
--        print "\t [--benchmark]                add the timing information"
--        print "\t "
--        print "Options that CTS will usually auto-detect correctly: "
--        print "\t [--logfile path]             where should the test software look \
                for logs from cluster nodes"
--        print "\t [--syslog-facility name]     which syslog facility should the \
                test software log to"
--        print "\t [--at-boot (1|0)]            does the cluster software start at \
                boot time"
--        print "\t [--test-ip-base ip]          offset for generated IP address \
                resources"
--        print "\t "
--        print "Options for release testing: "
--        print "\t [--populate-resources | -r]  generate a sample configuration"
--        print "\t [--choose name]              run only the named test"
--        print "\t [--stonith (1 | 0 | yes | no | rhcs | ssh)]"
--        print "\t [--once]                     run all valid tests once"
--        print "\t "
--        print "Additional (less common) options: "
--        print "\t [--clobber-cib | -c ]        erase any existing configuration"
--        print "\t [--outputfile path]          optional location for the test \
                software to write logs to"
--        print "\t [--trunc]                    truncate logfile before starting"
--        print "\t [--xmit-loss lost-rate(0.0-1.0)]"
--        print "\t [--recv-loss lost-rate(0.0-1.0)]"
--        print "\t [--standby (1 | 0 | yes | no)]"
--        print "\t [--fencing (1 | 0 | yes | no | rhcs | lha | openstack )]"
--        print "\t [--stonith-type type]"
--        print "\t [--stonith-args name=value]"
--        print "\t [--bsc]"
--        print "\t [--no-loop-tests]            dont run looping/time-based tests"
--        print "\t [--no-unsafe-tests]          dont run tests that are unsafe for \
                use with ocfs2/drbd"
--        print "\t [--valgrind-tests]           include tests using valgrind"
--        print "\t [--experimental-tests]       include experimental tests"
--        print "\t [--container-tests]          include pacemaker_remote tests that \
                run in lxc container resources"
--        print "\t [--oprofile 'node list']     list of cluster nodes to run \
                oprofile on]"
--        print "\t [--qarsh]                    use the QARSH backdoor to access \
                nodes instead of SSH"
--        print "\t [--docker]                   Indicates nodes are docker nodes."
--        print "\t [--seed random_seed]"
--        print "\t [--set option=value]"
--        print "\t "
--        print "\t Example: "
--        print "\t    python sys.argv[0] -g virt1 --stack cs -r --stonith ssh \
                --schema pacemaker-1.0 500"
-+            print("Illegal argument %s" % arg)
-+        print("usage: " + sys.argv[0] +" [options] number-of-iterations")
-+        print("\nCommon options: ")
-+        print("\t [--nodes 'node list']        list of cluster nodes separated by \
                whitespace")
-+        print("\t [--group | -g 'name']        use the nodes listed in the named \
                DSH group (~/.dsh/groups/$name)")
-+        print("\t [--limit-nodes max]          only use the first 'max' cluster \
                nodes supplied with --nodes")
-+        print("\t [--stack (v0|v1|cman|corosync|heartbeat|openais)]    which \
                cluster stack is installed")
-+        print("\t [--list-tests]               list the valid tests")
-+        print("\t [--benchmark]                add the timing information")
-+        print("\t ")
-+        print("Options that CTS will usually auto-detect correctly: ")
-+        print("\t [--logfile path]             where should the test software look \
                for logs from cluster nodes")
-+        print("\t [--syslog-facility name]     which syslog facility should the \
                test software log to")
-+        print("\t [--at-boot (1|0)]            does the cluster software start at \
                boot time")
-+        print("\t [--test-ip-base ip]          offset for generated IP address \
                resources")
-+        print("\t ")
-+        print("Options for release testing: ")
-+        print("\t [--populate-resources | -r]  generate a sample configuration")
-+        print("\t [--choose name]              run only the named test")
-+        print("\t [--stonith (1 | 0 | yes | no | rhcs | ssh)]")
-+        print("\t [--once]                     run all valid tests once")
-+        print("\t ")
-+        print("Additional (less common) options: ")
-+        print("\t [--clobber-cib | -c ]        erase any existing configuration")
-+        print("\t [--outputfile path]          optional location for the test \
                software to write logs to")
-+        print("\t [--trunc]                    truncate logfile before starting")
-+        print("\t [--xmit-loss lost-rate(0.0-1.0)]")
-+        print("\t [--recv-loss lost-rate(0.0-1.0)]")
-+        print("\t [--standby (1 | 0 | yes | no)]")
-+        print("\t [--fencing (1 | 0 | yes | no | rhcs | lha | openstack )]")
-+        print("\t [--stonith-type type]")
-+        print("\t [--stonith-args name=value]")
-+        print("\t [--bsc]")
-+        print("\t [--no-loop-tests]            dont run looping/time-based tests")
-+        print("\t [--no-unsafe-tests]          dont run tests that are unsafe for \
                use with ocfs2/drbd")
-+        print("\t [--valgrind-tests]           include tests using valgrind")
-+        print("\t [--experimental-tests]       include experimental tests")
-+        print("\t [--container-tests]          include pacemaker_remote tests that \
                run in lxc container resources")
-+        print("\t [--oprofile 'node list']     list of cluster nodes to run \
                oprofile on]")
-+        print("\t [--qarsh]                    use the QARSH backdoor to access \
                nodes instead of SSH")
-+        print("\t [--docker]                   Indicates nodes are docker nodes.")
-+        print("\t [--seed random_seed]")
-+        print("\t [--set option=value]")
-+        print("\t ")
-+        print("\t Example: ")
-+        print("\t    python sys.argv[0] -g virt1 --stack cs -r --stonith ssh \
                --schema pacemaker-1.0 500")
-
-         sys.exit(status)
-
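
Environment is a dictionary-like object with a few synthesized keys, which is why the port replaces its has_key() calls with membership tests against the backing self.data store. A condensed sketch of that protocol (hypothetical class, far smaller than the real one):

    class Env(object):
        def __init__(self, nodes):
            self.nodes = nodes
            self.data = {"Stack": "corosync 2.x"}

        def __contains__(self, key):
            # "nodes" is synthesized; everything else lives in self.data
            return key == "nodes" or key in self.data

        def __getitem__(self, key):
            if key == "nodes":
                return self.nodes
            return self.data[key]

    env = Env(["node-a", "node-b"])
    print("Stack" in env, env["nodes"])
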
-diff --git a/cts/logging.py b/cts/logging.py
-index 8afa611..08da44a 100644
---- a/cts/logging.py
-+++ b/cts/logging.py
-@@ -22,7 +22,7 @@ Licensed under the GNU GPL.
- # along with this program; if not, write to the Free Software
- # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA.
-
--import types, string, sys, time, os
-+import string, sys, time, os
-
- class Logger:
-     TimeFormat = "%b %d %H:%M:%S\t"
-@@ -47,7 +47,7 @@ class StdErrLog(Logger):
-
-     def __call__(self, lines):
-         t = time.strftime(Logger.TimeFormat, time.localtime(time.time()))
--        if isinstance(lines, types.StringType):
-+        if isinstance(lines, basestring):
-             sys.__stderr__.writelines([t, lines, "\n"])
-         else:
-             for line in lines:
-@@ -71,7 +71,7 @@ class FileLog(Logger):
-         fd = open(self.logfile, "a")
-         t = time.strftime(Logger.TimeFormat, time.localtime(time.time()))
-
--        if isinstance(lines, types.StringType):
-+        if isinstance(lines, basestring):
-             fd.writelines([t, self.hostname, self.source, lines, "\n"])
-         else:
-             for line in lines:
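
types.StringType only matches str on Python 2, so the loggers switch to basestring, which also covers unicode; Python 3 then drops basestring in favor of a unified str. A portable guard for code that must run on both often looks like this (a sketch, not part of the patch):

    try:
        string_types = basestring   # Python 2: str and unicode
    except NameError:
        string_types = str          # Python 3

    def to_lines(lines):
        # Normalize the str-or-sequence argument the way the loggers do
        if isinstance(lines, string_types):
            return [lines]
        return list(lines)
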
-diff --git a/cts/patterns.py b/cts/patterns.py
-index 493b690..3cdce2f 100644
---- a/cts/patterns.py
-+++ b/cts/patterns.py
-@@ -67,9 +67,9 @@ class BasePatterns:
-         }
-
-     def get_component(self, key):
--        if self.components.has_key(key):
-+        if key in self.components:
-             return self.components[key]
--        print "Unknown component '%s' for %s" % (key, self.name)
-+        print("Unknown component '%s' for %s" % (key, self.name))
-         return []
-
-     def get_patterns(self, key):
-@@ -87,12 +87,12 @@ class BasePatterns:
-     def __getitem__(self, key):
-         if key == "Name":
-             return self.name
--        elif self.commands.has_key(key):
-+        elif key in self.commands:
-             return self.commands[key]
--        elif self.search.has_key(key):
-+        elif key in self.search:
-             return self.search[key]
-         else:
--            print "Unknown template '%s' for %s" % (key, self.name)
-+            print("Unknown template '%s' for %s" % (key, self.name))
-             return None
-
- class crm_lha(BasePatterns):
-@@ -489,9 +489,9 @@ class PatternSelector:
-             crm_mcp_docker(name)
-
-     def get_variant(self, variant):
--        if patternvariants.has_key(variant):
-+        if variant in patternvariants:
-             return patternvariants[variant]
--        print "defaulting to crm-base for %s" % variant
-+        print("defaulting to crm-base for %s" % variant)
-         return self.base
-
-     def get_patterns(self, variant, kind):
-@@ -532,7 +532,7 @@ if __name__ == '__main__':
-            template = args[i+1]
-
-        else:
--           print "Illegal argument " + args[i]
-+           print("Illegal argument " + args[i])
-
-
--    print PatternSelector(kind)[template]
-+    print(PatternSelector(kind)[template])
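
Both lookups above fall back to a default instead of raising, which dict.get() expresses directly. An equivalent sketch (hypothetical values):

    patternvariants = {"crm-lha": "lha patterns", "crm-mcp": "mcp patterns"}
    base = "crm-base patterns"

    def get_variant(variant):
        # Same fallback behavior, written with dict.get()
        found = patternvariants.get(variant)
        if found is None:
            print("defaulting to crm-base for %s" % variant)
            return base
        return found
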
-diff --git a/cts/remote.py b/cts/remote.py
-index b32b028..040b48a 100644
---- a/cts/remote.py
-+++ b/cts/remote.py
-@@ -147,7 +147,7 @@ class RemoteExec:
-         sysname = args[0]
-         command = args[1]
-
--        #print "sysname: %s, us: %s" % (sysname, self.OurNode)
-+        #print("sysname: %s, us: %s" % (sysname, self.OurNode))
-         if sysname == None or string.lower(sysname) == self.OurNode or sysname == "localhost":
-             ret = command
-         else:
-@@ -164,7 +164,7 @@ class RemoteExec:
-             self.logger.debug(args)
-
-     def call_async(self, node, command, completionDelegate=None):
--        #if completionDelegate: print "Waiting for %d on %s: %s" % (proc.pid, node, command)
-+        #if completionDelegate: print("Waiting for %d on %s: %s" % (proc.pid, node, command))
-         aproc = AsyncRemoteCmd(node, self._cmd([node, command]), completionDelegate=completionDelegate)
-         aproc.start()
-         return aproc
-@@ -186,7 +186,7 @@ class RemoteExec:
-         proc = Popen(self._cmd([node, command]),
-                      stdout = PIPE, stderr = PIPE, close_fds = True, shell = True)
-
--        #if completionDelegate: print "Waiting for %d on %s: %s" % (proc.pid, node, command)
-+        #if completionDelegate: print("Waiting for %d on %s: %s" % (proc.pid, node, command))
-         if not synchronous and proc.pid > 0 and not self.silent:
-             aproc = AsyncWaitProc(proc, node, command, completionDelegate=completionDelegate)
-             aproc.start()
-             aproc.start()
-@@ -257,14 +257,14 @@ class RemoteFactory:
-         return RemoteExec(RemoteFactory.rsh, silent)
-
-     def enable_docker(self):
--        print "Using DOCKER backend for connections to cluster nodes"
-+        print("Using DOCKER backend for connections to cluster nodes")
-
-         RemoteFactory.rsh.Command = "/usr/libexec/phd/docker/phd_docker_remote_cmd "
-         RemoteFactory.rsh.CpCommand = "/usr/libexec/phd/docker/phd_docker_cp"
-
-     def enable_qarsh(self):
-         # http://nstraz.wordpress.com/2008/12/03/introducing-qarsh/
--        print "Using QARSH for connections to cluster nodes"
-+        print("Using QARSH for connections to cluster nodes")
-
-         RemoteFactory.rsh.Command = "qarsh -t 300 -l root"
-         RemoteFactory.rsh.CpCommand = "qacp -q"
-diff --git a/cts/watcher.py b/cts/watcher.py
-index 1182c8b..de032f7 100644
---- a/cts/watcher.py
-+++ b/cts/watcher.py
-@@ -73,7 +73,7 @@ for i in range(0, len(args)):
-         skipthis=1
-
- if not os.access(filename, os.R_OK):
--    print prefix + 'Last read: %d, limit=%d, count=%d - unreadable' % (0, limit, 0)
-+    print(prefix + 'Last read: %d, limit=%d, count=%d - unreadable' % (0, limit, 0))
-     sys.exit(1)
-
- logfile=open(filename, 'r')
-@@ -85,7 +85,7 @@ if offset != 'EOF':
-     if newsize >= offset:
-         logfile.seek(offset)
-     else:
--        print prefix + ('File truncated from %d to %d' % (offset, newsize))
-+        print(prefix + ('File truncated from %d to %d' % (offset, newsize)))
-         if (newsize*1.05) < offset:
-             logfile.seek(0)
-         # else: we probably just lost a few logs after a fencing op
-@@ -103,10 +103,10 @@ while True:
-     line = logfile.readline()
-     if not line: break
-
--    print line.strip()
-+    print(line.strip())
-     count += 1
-
--print prefix + 'Last read: %d, limit=%d, count=%d' % (logfile.tell(), limit, count)
-+print(prefix + 'Last read: %d, limit=%d, count=%d' % (logfile.tell(), limit, count))
- logfile.close()
- """
-
-@@ -158,7 +158,7 @@ class FileObj(SearchObj):
-         SearchObj.__init__(self, filename, host, name)
-
-         if host is not None:
--            if not has_log_watcher.has_key(host):
-+            if not host in has_log_watcher:
-
-                 global log_watcher
-                 global log_watcher_bin
-@@ -381,7 +381,7 @@ class LogWatcher(RemoteExec):
-         else:
-             self.file_list.append(FileObj(self.filename))
-
--        # print "%s now has %d files" % (self.name, len(self.file_list))
-+        # print("%s now has %d files" % (self.name, len(self.file_list)))
-
-     def __del__(self):
-         if self.debug_level > 1: self.debug("Destroy")
-@@ -406,7 +406,7 @@ class LogWatcher(RemoteExec):
-             raise ValueError("No sources to read from")
-
-         pending = []
--        #print "%s waiting for %d operations" % (self.name, self.pending)
-+        #print("%s waiting for %d operations" % (self.name, self.pending))
-         for f in self.file_list:
-             t = f.harvest_async(self)
-             if t:
-@@ -418,7 +418,7 @@ class LogWatcher(RemoteExec):
-                 self.logger.log("%s: Aborting after 20s waiting for %s logging commands" % (self.name, repr(t)))
-                 return
-
--        #print "Got %d lines" % len(self.line_cache)
-+        #print("Got %d lines" % len(self.line_cache))
-
-     def end(self):
-         for f in self.file_list:
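
The embedded watcher script resumes reading at a saved byte offset and, when the file shrank, treats a shrink of more than roughly 5% as a real truncation rather than a few logs lost around a fencing operation. A compact sketch of that resume heuristic (hypothetical helper):

    import os

    def resume_offset(path, saved_offset):
        # Resume where we left off; if the file shrank, re-read from the
        # start. The watcher additionally logs the >5% case as a real
        # truncation instead of a minor post-fencing gap.
        size = os.stat(path).st_size
        return saved_offset if size >= saved_offset else 0
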
-diff --git a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
-index 5d5fa33..b0115fb 100644
---- a/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
-+++ b/doc/Pacemaker_Explained/en-US/Ch-Resources.txt
-@@ -643,6 +643,16 @@ indexterm:[Action,Property,on-fail]
-  indexterm:[enabled,Action Property]
-  indexterm:[Action,Property,enabled]
-
-+|role
-+|
-+|This option only makes sense for recurring operations.  It restricts
-+ the operation to a specific role.  The truly paranoid can even
-+ specify +role=Stopped+ which allows the cluster to detect an admin
-+ that manually started cluster services.
-+ Allowed values: +Stopped+, +Started+, +Slave+, +Master+.
-+ indexterm:[role,Action Property]
-+ indexterm:[Action,Property,role]
-+
- |========================================================-
- [[s-operation-defaults]]
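
The new role row documents restricting a recurring operation to resources in a given role, with role=Stopped monitors as the notable case: they let the cluster notice services an administrator started by hand. Illustratively (this fragment is not from the patch), such an operation would be configured along the lines of:

    <op id="db-monitor-stopped" name="monitor" interval="30s" role="Stopped"/>
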
-diff --git a/fencing/commands.c b/fencing/commands.c
-index 0d2d614..bd3b27d 100644
---- a/fencing/commands.c
-+++ b/fencing/commands.c
-@@ -124,17 +124,7 @@ static xmlNode *stonith_construct_async_reply(async_command_t * cmd, const char
- static gboolean
- is_action_required(const char *action, stonith_device_t *device)
- {
--    if(device == NULL) {
--        return FALSE;
--
--    } else if (device->required_actions == NULL) {
--        return FALSE;
--
--    } else if (strstr(device->required_actions, action)) {
--        return TRUE;
--    }
--
--    return FALSE;
-+    return device && device->automatic_unfencing && safe_str_eq(action, "on");
- }
-
- static int
-@@ -449,7 +439,6 @@ free_device(gpointer data)
-     free_xml(device->agent_metadata);
-     free(device->namespace);
-     free(device->on_target_actions);
--    free(device->required_actions);
-     free(device->agent);
-     free(device->id);
-     free(device);
-@@ -713,8 +702,6 @@ read_action_metadata(stonith_device_t *device)
-     for (lpc = 0; lpc < max; lpc++) {
-         const char *on_target = NULL;
-         const char *action = NULL;
--        const char *automatic = NULL;
--        const char *required = NULL;
-         xmlNode *match = getXpathResult(xpath, lpc);
-
-         CRM_LOG_ASSERT(match != NULL);
-@@ -722,8 +709,6 @@ read_action_metadata(stonith_device_t *device)
-
-         on_target = crm_element_value(match, "on_target");
-         action = crm_element_value(match, "name");
--        automatic = crm_element_value(match, "automatic");
--        required = crm_element_value(match, "required");
-
-         if(safe_str_eq(action, "list")) {
-             set_bit(device->flags, st_device_supports_list);
-@@ -731,17 +716,21 @@ read_action_metadata(stonith_device_t *device)
-             set_bit(device->flags, st_device_supports_status);
-         } else if(safe_str_eq(action, "reboot")) {
-             set_bit(device->flags, st_device_supports_reboot);
--        } else if(safe_str_eq(action, "on") && (crm_is_true(automatic))) {
--            /* this setting implies required=true for unfencing */
--            required = "true";
-+        } else if (safe_str_eq(action, "on")) {
-+            /* "automatic" means the cluster will unfence node when it joins */
-+            const char *automatic = crm_element_value(match, "automatic");
-+
-+            /* "required" is a deprecated synonym for "automatic" */
-+            const char *required = crm_element_value(match, "required");
-+
-+            if (crm_is_true(automatic) || crm_is_true(required)) {
-+                device->automatic_unfencing = TRUE;
-+            }
-         }
-
-         if (action && crm_is_true(on_target)) {
-             device->on_target_actions = add_action(device->on_target_actions, action);
-         }
--        if (action && crm_is_true(required)) {
--            device->required_actions = add_action(device->required_actions, action);
--        }
-     }
-
-     freeXpathObject(xpath);
-@@ -778,8 +767,7 @@ build_device_from_xml(xmlNode * msg)
-
-     value = crm_element_value(dev, "rsc_provides");
-     if (safe_str_eq(value, "unfencing")) {
--        /* if this agent requires unfencing, 'on' is considered a required action */
--        device->required_actions = add_action(device->required_actions, "on");
-+        device->automatic_unfencing = TRUE;
-     }
-
-     if (is_action_required("on", device)) {
-@@ -1224,7 +1212,6 @@ stonith_device_action(xmlNode * msg, char **output)
-     } else if (device) {
-         cmd = create_async_command(msg);
-         if (cmd == NULL) {
--            free_device(device);
-             return -EPROTO;
-         }
-
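
The rewritten metadata parsing collapses the old required-actions string into a single automatic_unfencing flag: only the "on" action is examined, and the deprecated "required" attribute is honored as a synonym for "automatic". A Python paraphrase of that decision (the C above is authoritative; crm_is_true() accepts a few more spellings of truth):

    def wants_automatic_unfencing(action, attrs):
        # "required" is a deprecated synonym for "automatic"
        if action != "on":
            return False
        truthy = ("true", "yes", "1")
        return attrs.get("automatic") in truthy or \
               attrs.get("required") in truthy
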
-diff --git a/fencing/internal.h b/fencing/internal.h
-index 5fb8f9c..0f418ec 100644
---- a/fencing/internal.h
-+++ b/fencing/internal.h
-@@ -26,12 +26,13 @@ typedef struct stonith_device_s {
-
-     /*! list of actions that must execute on the target node. Used for unfencing */
-     char *on_target_actions;
--    char *required_actions;
-     GListPtr targets;
-     time_t targets_age;
-     gboolean has_attr_map;
-     /* should nodeid parameter for victim be included in agent arguments */
-     gboolean include_nodeid;
-+    /* whether the cluster should automatically unfence nodes with the device */
-+    gboolean automatic_unfencing;
-     guint priority;
-     guint active_pid;
-
-@@ -59,7 +60,8 @@ typedef struct stonith_device_s {
- enum st_remap_phase {
-     st_phase_requested = 0,
-     st_phase_off = 1,
--    st_phase_on = 2
-+    st_phase_on = 2,
-+    st_phase_max = 3
- };
-
- typedef struct remote_fencing_op_s {
-@@ -128,15 +130,9 @@ typedef struct remote_fencing_op_s {
-     /*! The current operation phase being executed */
-     enum st_remap_phase phase;
-
--    /* For phase 0 or 1 (requested action or a remapped "off"), required devices
--     * will be executed regardless of what topology level is being executed
--     * currently. For phase 1 (remapped "on"), required devices will not be
--     * attempted, because the cluster will execute them automatically when the
--     * node next joins the cluster.
--     */
--    /*! Lists of devices marked as required for each phase */
--    GListPtr required_list[3];
--    /*! The device list of all the devices at the current executing topology level. */
-+    /*! Devices with automatic unfencing (always run if "on" requested, never if remapped) */
-+    GListPtr automatic_list;
-+    /*! List of all devices at the currently executing topology level */
-     GListPtr devices_list;
-     /*! Current entry in the topology device list */
-     GListPtr devices;
-diff --git a/fencing/main.c b/fencing/main.c
-index 46d7352..c48e12d 100644
---- a/fencing/main.c
-+++ b/fencing/main.c
-@@ -553,7 +553,7 @@ remove_fencing_topology(xmlXPathObjectPtr xpathObj)
- }
-
- static void
--register_fencing_topology(xmlXPathObjectPtr xpathObj, gboolean force)
-+register_fencing_topology(xmlXPathObjectPtr xpathObj)
- {
-     int max = numXpathResults(xpathObj), lpc = 0;
-
-@@ -584,7 +584,7 @@ register_fencing_topology(xmlXPathObjectPtr xpathObj, gboolean force)
- */
-
- static void
--fencing_topology_init(xmlNode * msg)
-+fencing_topology_init()
- {
-     xmlXPathObjectPtr xpathObj = NULL;
-     const char *xpath = "//" XML_TAG_FENCING_LEVEL;
-@@ -598,7 +598,7 @@ fencing_topology_init(xmlNode * msg)
-
-     /* Grab everything */
-     xpathObj = xpath_search(local_cib, xpath);
--    register_fencing_topology(xpathObj, TRUE);
-+    register_fencing_topology(xpathObj);
-
-     freeXpathObject(xpathObj);
- }
-@@ -931,7 +931,7 @@ update_fencing_topology(const char *event, xmlNode * msg)
-         xpath = "//" F_CIB_UPDATE_RESULT "//" XML_TAG_DIFF_ADDED "//" \
                XML_TAG_FENCING_LEVEL;
-         xpathObj = xpath_search(msg, xpath);
-
--        register_fencing_topology(xpathObj, FALSE);
-+        register_fencing_topology(xpathObj);
-         freeXpathObject(xpathObj);
-
-     } else if(format == 2) {
-@@ -969,7 +969,7 @@ update_fencing_topology(const char *event, xmlNode * msg)
-                     /* Nuclear option, all we have is the path and an id... not enough to remove a specific entry */
-                     crm_info("Re-initializing fencing topology after %s operation %d.%d.%d for %s",
-                              op, add[0], add[1], add[2], xpath);
--                    fencing_topology_init(NULL);
-+                    fencing_topology_init();
-                     return;
-                 }
-
-@@ -977,7 +977,7 @@ update_fencing_topology(const char *event, xmlNode * msg)
-                 /* Change to the topology in general */
-                 crm_info("Re-initializing fencing topology after top-level %s \
                operation  %d.%d.%d for %s",
-                          op, add[0], add[1], add[2], xpath);
--                fencing_topology_init(NULL);
-+                fencing_topology_init();
-                 return;
-
-             } else if (strstr(xpath, "/" XML_CIB_TAG_CONFIGURATION)) {
-@@ -989,7 +989,7 @@ update_fencing_topology(const char *event, xmlNode * msg)
-                 } else if(strcmp(op, "delete") == 0 || strcmp(op, "create") == 0) {
-                     crm_info("Re-initializing fencing topology after top-level %s \
                operation %d.%d.%d for %s.",
-                              op, add[0], add[1], add[2], xpath);
--                    fencing_topology_init(NULL);
-+                    fencing_topology_init();
-                     return;
-                 }
-
-@@ -1098,7 +1098,7 @@ update_cib_cache_cb(const char *event, xmlNode * msg)
-     } else if (stonith_enabled_saved == FALSE) {
-         crm_info("Updating stonith device and topology lists now that stonith is \
                enabled");
-         stonith_enabled_saved = TRUE;
--        fencing_topology_init(NULL);
-+        fencing_topology_init();
-         cib_devices_update();
-
-     } else {
-@@ -1114,7 +1114,7 @@ init_cib_cache_cb(xmlNode * msg, int call_id, int rc, xmlNode * output, void *us
-     have_cib_devices = TRUE;
-     local_cib = copy_xml(output);
-
--    fencing_topology_init(msg);
-+    fencing_topology_init();
-     cib_devices_update();
- }
-
-@@ -1239,7 +1239,7 @@ st_peer_update_callback(enum crm_status_type type, crm_node_t * node, const void
-          * This is a hack until we can send to a nodeid and/or we fix node name lookups
-          * These messages are ignored in stonith_peer_callback()
-          */
--        xmlNode *query = query = create_xml_node(NULL, "stonith_command");
-+        xmlNode *query = create_xml_node(NULL, "stonith_command");
-
-         crm_xml_add(query, F_XML_TAGNAME, "stonith_command");
-         crm_xml_add(query, F_TYPE, T_STONITH_NG);
-diff --git a/fencing/remote.c b/fencing/remote.c
-index 2c00b5f..d741672 100644
---- a/fencing/remote.c
-+++ b/fencing/remote.c
-@@ -60,13 +60,13 @@ typedef struct device_properties_s {
-     /* The remaining members are indexed by the operation's "phase" */
-
-     /* Whether this device has been executed in each phase */
--    gboolean executed[3];
-+    gboolean executed[st_phase_max];
-     /* Whether this device is disallowed from executing in each phase */
--    gboolean disallowed[3];
-+    gboolean disallowed[st_phase_max];
-     /* Action-specific timeout for each phase */
--    int custom_action_timeout[3];
-+    int custom_action_timeout[st_phase_max];
-     /* Action-specific maximum random delay for each phase */
--    int delay_max[3];
-+    int delay_max[st_phase_max];
- } device_properties_t;
-
- typedef struct st_query_result_s {
-@@ -207,22 +207,6 @@ grab_peer_device(const remote_fencing_op_t *op, st_query_result_t *peer,
-     return TRUE;
- }
-
--/*
-- * \internal
-- * \brief Free the list of required devices for a particular phase
-- *
-- * \param[in,out] op     Operation to modify
-- * \param[in]     phase  Phase to modify
-- */
--static void
--free_required_list(remote_fencing_op_t *op, enum st_remap_phase phase)
--{
--    if (op->required_list[phase]) {
--        g_list_free_full(op->required_list[phase], free);
--        op->required_list[phase] = NULL;
--    }
--}
--
- static void
- clear_remote_op_timers(remote_fencing_op_t * op)
- {
-@@ -268,9 +252,7 @@ free_remote_op(gpointer data)
-         g_list_free_full(op->devices_list, free);
-         op->devices_list = NULL;
-     }
--    free_required_list(op, st_phase_requested);
--    free_required_list(op, st_phase_off);
--    free_required_list(op, st_phase_on);
-+    g_list_free_full(op->automatic_list, free);
-     free(op);
- }
-
-@@ -323,10 +305,10 @@ op_phase_on(remote_fencing_op_t *op)
-     op->phase = st_phase_on;
-     strcpy(op->action, "on");
-
--    /* Any devices that are required for "on" will be automatically executed by
--     * the cluster when the node next joins, so we skip them here.
-+    /* Skip devices with automatic unfencing, because the cluster will handle it
-+     * when the node rejoins.
-      */
--    for (iter = op->required_list[op->phase]; iter != NULL; iter = iter->next) {
-+    for (iter = op->automatic_list; iter != NULL; iter = iter->next) {
-         GListPtr match = g_list_find_custom(op->devices_list, iter->data,
-                                             sort_strings);
-
-@@ -334,12 +316,8 @@ op_phase_on(remote_fencing_op_t *op)
-             op->devices_list = g_list_remove(op->devices_list, match->data);
-         }
-     }
--
--    /* We know this level will succeed, because phase 1 completed successfully
--     * and we ignore any errors from phase 2. So we can free the required list,
--     * which will keep them from being executed after the device list is done.
--     */
--    free_required_list(op, op->phase);
-+    g_list_free_full(op->automatic_list, free);
-+    op->automatic_list = NULL;
-
-     /* Rewind device list pointer */
-     op->devices = op->devices_list;
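
With reboots remapped to off+on, the on phase now simply drops any device that unfences automatically, since the cluster will unfence the node itself when it rejoins. A rough Python model of that filtering (names hypothetical):

    def devices_for_on_phase(devices, automatic):
        # Devices with automatic unfencing are skipped in the remapped
        # "on" phase; the cluster handles them when the node rejoins.
        skip = set(automatic)
        return [d for d in devices if d not in skip]

    print(devices_for_on_phase(["fence_ipmi", "fence_scsi"], ["fence_scsi"]))
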
-@@ -659,28 +637,25 @@ topology_is_empty(stonith_topology_t *tp)
-
- /*
-  * \internal
-- * \brief Add a device to the required list for a particular phase
-+ * \brief Add a device to an operation's automatic unfencing list
-  *
-  * \param[in,out] op      Operation to modify
-- * \param[in]     phase   Phase to modify
-  * \param[in]     device  Device ID to add
-  */
- static void
--add_required_device(remote_fencing_op_t *op, enum st_remap_phase phase,
--                    const char *device)
-+add_required_device(remote_fencing_op_t *op, const char *device)
- {
--    GListPtr match  = g_list_find_custom(op->required_list[phase], device,
-+    GListPtr match  = g_list_find_custom(op->automatic_list, device,
-                                          sort_strings);
-
-     if (!match) {
--        op->required_list[phase] = g_list_prepend(op->required_list[phase],
--                                                  strdup(device));
-+        op->automatic_list = g_list_prepend(op->automatic_list, strdup(device));
-     }
- }
-
- /*
-  * \internal
-- * \brief Remove a device from the required list for the current phase
-+ * \brief Remove a device from the automatic unfencing list
-  *
-  * \param[in,out] op      Operation to modify
-  * \param[in]     device  Device ID to remove
-@@ -688,12 +663,11 @@ add_required_device(remote_fencing_op_t *op, enum st_remap_phase phase,
- static void
- remove_required_device(remote_fencing_op_t *op, const char *device)
- {
--    GListPtr match = g_list_find_custom(op->required_list[op->phase], device,
-+    GListPtr match = g_list_find_custom(op->automatic_list, device,
-                                         sort_strings);
-
-     if (match) {
--        op->required_list[op->phase] = g_list_remove(op->required_list[op->phase],
--                                                     match->data);
-+        op->automatic_list = g_list_remove(op->automatic_list, match->data);
-     }
- }
-
-@@ -938,7 +912,7 @@ create_remote_stonith_op(const char *client, xmlNode * request, gboolean peer)
-
-     op = calloc(1, sizeof(remote_fencing_op_t));
-
--    crm_element_value_int(request, F_STONITH_TIMEOUT, (int *)&(op->base_timeout));
-+    crm_element_value_int(request, F_STONITH_TIMEOUT, &(op->base_timeout));
-
-     if (peer && dev) {
-         op->id = crm_element_value_copy(dev, F_STONITH_REMOTE_OP_ID);
-@@ -974,7 +948,7 @@ create_remote_stonith_op(const char *client, xmlNode * request, gboolean peer)
-     crm_element_value_int(request, F_STONITH_CALLOPTS, &call_options);
-     op->call_options = call_options;
-
--    crm_element_value_int(request, F_STONITH_CALLID, (int *)&(op->client_callid));
-+    crm_element_value_int(request, F_STONITH_CALLID, &(op->client_callid));
-
-     crm_trace("%s new stonith op: %s - %s of %s for %s",
-               (peer
-@@ -1352,14 +1326,17 @@ advance_op_topology(remote_fencing_op_t *op, const char *device, xmlNode *msg,
-         op->devices = op->devices->next;
-     }
-
--    /* If this device was required, it's not anymore */
--    remove_required_device(op, device);
-+    /* Handle automatic unfencing if an "on" action was requested */
-+    if ((op->phase == st_phase_requested) && safe_str_eq(op->action, "on")) {
-+        /* If the device we just executed was required, it's not anymore */
-+        remove_required_device(op, device);
-
--    /* If there are no more devices at this topology level,
--     * run through any required devices not already executed
--     */
--    if (op->devices == NULL) {
--        op->devices = op->required_list[op->phase];
-+        /* If there are no more devices at this topology level, run through any
-+         * remaining devices with automatic unfencing
-+         */
-+        if (op->devices == NULL) {
-+            op->devices = op->automatic_list;
-+        }
-     }
-
-     if ((op->devices == NULL) && (op->phase == st_phase_off)) {
-@@ -1613,8 +1590,6 @@ parse_action_specific(xmlNode *xml, const char *peer, const char *device,
-                       const char *action, remote_fencing_op_t *op,
-                       enum st_remap_phase phase, device_properties_t *props)
- {
--    int required;
--
-     props->custom_action_timeout[phase] = 0;
-     crm_element_value_int(xml, F_STONITH_ACTION_TIMEOUT,
-                           &props->custom_action_timeout[phase]);
-@@ -1630,20 +1605,16 @@ parse_action_specific(xmlNode *xml, const char *peer, const char *device,
-                   peer, device, props->delay_max[phase], action);
-     }
-
--    required = 0;
--    crm_element_value_int(xml, F_STONITH_DEVICE_REQUIRED, &required);
--    if (required) {
--        /* If the action is marked as required, add the device to the
--         * operation's list of required devices for this phase. We use this
--         * for unfencing when executing a topology. In phase 0 (requested
--         * action) or phase 1 (remapped "off"), required devices get executed
--         * regardless of their topology level; in phase 2 (remapped "on"),
--         * required devices are not attempted, because the cluster will
--         * execute them automatically later.
--         */
--        crm_trace("Peer %s requires device %s to execute for action %s",
--                  peer, device, action);
--        add_required_device(op, phase, device);
-+    /* Handle devices with automatic unfencing */
-+    if (safe_str_eq(action, "on")) {
-+        int required = 0;
-+
-+        crm_element_value_int(xml, F_STONITH_DEVICE_REQUIRED, &required);
-+        if (required) {
-+            crm_trace("Peer %s requires device %s to execute for action %s",
-+                      peer, device, action);
-+            add_required_device(op, device);
-+        }
-     }
-
-     /* If a reboot is remapped to off+on, it's possible that a node is allowed
-diff --git a/include/crm/cib.h b/include/crm/cib.h
-index cb465bf..306706e 100644
---- a/include/crm/cib.h
-+++ b/include/crm/cib.h
-@@ -136,6 +136,13 @@ typedef struct cib_api_operations_s {
-                                    void *user_data, const char *callback_name,
-                                    void (*callback) (xmlNode *, int, int, xmlNode *, void *));
-
-+    gboolean (*register_callback_full)(cib_t *cib, int call_id, int timeout,
-+                                       gboolean only_success, void *user_data,
-+                                       const char *callback_name,
-+                                       void (*callback)(xmlNode *, int, int,
-+                                                        xmlNode *, void *),
-+                                       void (*free_func)(void *));
-+
- } cib_api_operations_t;
-
- struct cib_s {
-diff --git a/include/crm/cib/internal.h b/include/crm/cib/internal.h
-index 431a2bd..adc2faf 100644
---- a/include/crm/cib/internal.h
-+++ b/include/crm/cib/internal.h
-@@ -106,7 +106,7 @@ typedef struct cib_callback_client_s {
-     void *user_data;
-     gboolean only_success;
-     struct timer_rec_s *timer;
--
-+    void (*free_func)(void *);
- } cib_callback_client_t;
-
- struct timer_rec_s {
-@@ -137,6 +137,13 @@ int cib_native_register_notification(cib_t * cib, const char *callback, int enab
- gboolean cib_client_register_callback(cib_t * cib, int call_id, int timeout, gboolean only_success,
-                                       void *user_data, const char *callback_name,
-                                       void (*callback) (xmlNode *, int, int, xmlNode *, void *));
-+gboolean cib_client_register_callback_full(cib_t *cib, int call_id,
-+                                           int timeout, gboolean only_success,
-+                                           void *user_data,
-+                                           const char *callback_name,
-+                                           void (*callback)(xmlNode *, int, int,
-+                                                            xmlNode *, void *),
-+                                           void (*free_func)(void *));
-
- int cib_process_query(const char *op, int options, const char *section, xmlNode * req,
-                       xmlNode * input, xmlNode * existing_cib, xmlNode ** result_cib,
-diff --git a/include/crm/common/ipc.h b/include/crm/common/ipc.h
-index db83b09..d6ceda2 100644
---- a/include/crm/common/ipc.h
-+++ b/include/crm/common/ipc.h
-@@ -75,7 +75,7 @@ long crm_ipc_read(crm_ipc_t * client);
- const char *crm_ipc_buffer(crm_ipc_t * client);
- uint32_t crm_ipc_buffer_flags(crm_ipc_t * client);
- const char *crm_ipc_name(crm_ipc_t * client);
--int crm_ipc_default_buffer_size(void);
-+unsigned int crm_ipc_default_buffer_size(void);
-
- /* Utils */
- xmlNode *create_hello_message(const char *uuid, const char *client_name,
-diff --git a/include/crm/common/ipcs.h b/include/crm/common/ipcs.h
-index b43fc53..d825912 100644
---- a/include/crm/common/ipcs.h
-+++ b/include/crm/common/ipcs.h
-@@ -110,7 +110,7 @@ void crm_ipcs_send_ack(crm_client_t * c, uint32_t request, uint32_t flags,
-                        const char *tag, const char *function, int line);
-
- /* when max_send_size is 0, default ipc buffer size is used */
--ssize_t crm_ipc_prepare(uint32_t request, xmlNode * message, struct iovec **result, int32_t max_send_size);
-+ssize_t crm_ipc_prepare(uint32_t request, xmlNode * message, struct iovec ** result, uint32_t max_send_size);
- ssize_t crm_ipcs_send(crm_client_t * c, uint32_t request, xmlNode * message, enum crm_ipc_flags flags);
- ssize_t crm_ipcs_sendv(crm_client_t * c, struct iovec *iov, enum crm_ipc_flags flags);
- xmlNode *crm_ipcs_recv(crm_client_t * c, void *data, size_t size, uint32_t * id, uint32_t * flags);
-diff --git a/lib/cib/cib_client.c b/lib/cib/cib_client.c
-index b13323e..f7a19b8 100644
---- a/lib/cib/cib_client.c
-+++ b/lib/cib/cib_client.c
-@@ -198,6 +198,11 @@ cib_destroy_op_callback(gpointer data)
-         g_source_remove(blob->timer->ref);
-     }
-     free(blob->timer);
-+
-+    if (blob->user_data && blob->free_func) {
-+        blob->free_func(blob->user_data);
-+    }
-+
-     free(blob);
- }
-
-@@ -327,10 +332,15 @@ cib_new(void)
-     return cib_native_new();
- }
-
--/* this is backwards...
--   cib_*_new should call this not the other way around
-+/*
-+ * \internal
-+ * \brief Create a generic CIB connection instance
-+ *
-+ * \return Newly allocated and initialized cib_t instance
-+ *
-+ * \note This is called by each variant's cib_*_new() function before setting
-+ *       variant-specific values.
-  */
--
- cib_t *
- cib_new_variant(void)
- {
-@@ -364,6 +374,7 @@ cib_new_variant(void)
-     new_cib->cmds->add_notify_callback = cib_client_add_notify_callback;
-     new_cib->cmds->del_notify_callback = cib_client_del_notify_callback;
-     new_cib->cmds->register_callback = cib_client_register_callback;
-+    new_cib->cmds->register_callback_full = cib_client_register_callback_full;
-
-     new_cib->cmds->noop = cib_client_noop;
-     new_cib->cmds->ping = cib_client_ping;
-@@ -545,6 +556,19 @@ cib_client_register_callback(cib_t * cib, int call_id, int timeout, gboolean onl
-                              void *user_data, const char *callback_name,
-                              void (*callback) (xmlNode *, int, int, xmlNode *, void *))
- {
-+    return cib_client_register_callback_full(cib, call_id, timeout,
-+                                             only_success, user_data,
-+                                             callback_name, callback, NULL);
-+}
-+
-+gboolean
-+cib_client_register_callback_full(cib_t *cib, int call_id, int timeout,
-+                                  gboolean only_success, void *user_data,
-+                                  const char *callback_name,
-+                                  void (*callback)(xmlNode *, int, int,
-+                                                   xmlNode *, void *),
-+                                  void (*free_func)(void *))
-+{
-     cib_callback_client_t *blob = NULL;
-
-     if (call_id < 0) {
-@@ -553,6 +577,9 @@ cib_client_register_callback(cib_t * cib, int call_id, int timeout, gboolean onl
-         } else {
-             crm_warn("CIB call failed: %s", pcmk_strerror(call_id));
-         }
-+        if (user_data && free_func) {
-+            free_func(user_data);
-+        }
-         return FALSE;
-     }
-
-@@ -561,6 +588,7 @@ cib_client_register_callback(cib_t * cib, int call_id, int timeout, gboolean onl
-     blob->only_success = only_success;
-     blob->user_data = user_data;
-     blob->callback = callback;
-+    blob->free_func = free_func;
-
-     if (timeout > 0) {
-         struct timer_rec_s *async_timer = NULL;
-diff --git a/lib/cib/cib_utils.c b/lib/cib/cib_utils.c
-index d321517..4dc65aa 100644
---- a/lib/cib/cib_utils.c
-+++ b/lib/cib/cib_utils.c
-@@ -624,12 +624,6 @@ cib_native_callback(cib_t * cib, xmlNode * msg, int call_id, int rc)
- {
-     xmlNode *output = NULL;
-     cib_callback_client_t *blob = NULL;
--    cib_callback_client_t local_blob;
--
--    local_blob.id = NULL;
--    local_blob.callback = NULL;
--    local_blob.user_data = NULL;
--    local_blob.only_success = FALSE;
-
-     if (msg != NULL) {
-         crm_element_value_int(msg, F_CIB_RC, &rc);
-@@ -638,16 +632,8 @@ cib_native_callback(cib_t * cib, xmlNode * msg, int call_id, int rc)
-     }
-
-     blob = g_hash_table_lookup(cib_op_callback_table, GINT_TO_POINTER(call_id));
--
--    if (blob != NULL) {
--        local_blob = *blob;
--        blob = NULL;
--
--        remove_cib_op_callback(call_id, FALSE);
--
--    } else {
-+    if (blob == NULL) {
-         crm_trace("No callback found for call %d", call_id);
--        local_blob.callback = NULL;
-     }
-
-     if (cib == NULL) {
-@@ -659,15 +645,20 @@ cib_native_callback(cib_t * cib, xmlNode * msg, int call_id, int rc)
-         rc = pcmk_ok;
-     }
-
--    if (local_blob.callback != NULL && (rc == pcmk_ok || local_blob.only_success == FALSE)) {
--        crm_trace("Invoking callback %s for call %d", crm_str(local_blob.id), call_id);
--        local_blob.callback(msg, call_id, rc, output, local_blob.user_data);
-+    if (blob && blob->callback && (rc == pcmk_ok || blob->only_success == FALSE)) {
-+        crm_trace("Invoking callback %s for call %d", crm_str(blob->id), call_id);
-+        blob->callback(msg, call_id, rc, output, blob->user_data);
-
-     } else if (cib && cib->op_callback == NULL && rc != pcmk_ok) {
-         crm_warn("CIB command failed: %s", pcmk_strerror(rc));
-         crm_log_xml_debug(msg, "Failed CIB Update");
-     }
-
-+    /* This may free user_data, so do it after the callback */
-+    if (blob) {
-+        remove_cib_op_callback(call_id, FALSE);
-+    }
-+
-     if (cib && cib->op_callback != NULL) {
-         crm_trace("Invoking global callback for call %d", call_id);
-         cib->op_callback(msg, call_id, rc, output);
-diff --git a/lib/cluster/legacy.c b/lib/cluster/legacy.c
-index d93613d..e9905f6 100644
---- a/lib/cluster/legacy.c
-+++ b/lib/cluster/legacy.c
-@@ -52,6 +52,21 @@ void *ais_ipc_ctx = NULL;
-
- hdb_handle_t ais_ipc_handle = 0;
-
-+static bool valid_cman_name(const char *name, uint32_t nodeid)
-+{
-+    bool rc = TRUE;
-+
-+    /* Yes, %d, because that's what CMAN does */
-+    char *fakename = crm_strdup_printf("Node%d", nodeid);
-+
-+    if(crm_str_eq(fakename, name, TRUE)) {
-+        rc = FALSE;
-+        crm_notice("Ignoring inferred name from cman: %s", fakename);
-+    }
-+    free(fakename);
-+    return rc;
-+}
-+
- static gboolean
- plugin_get_details(uint32_t * id, char **uname)
- {
-@@ -361,6 +376,7 @@ cman_event_callback(cman_handle_t handle, void *privdata, int \
                reason, int arg)
-                          arg ? "retained" : "still lost");
-             }
-
-+            memset(cman_nodes, 0, MAX_NODES * sizeof(cman_node_t));
-             rc = cman_get_nodes(pcmk_cman_handle, MAX_NODES, &node_count, \
                cman_nodes);
-             if (rc < 0) {
-                 crm_err("Couldn't query cman node list: %d %d", rc, errno);
-@@ -369,6 +385,7 @@ cman_event_callback(cman_handle_t handle, void *privdata, int \
                reason, int arg)
-
-             for (lpc = 0; lpc < node_count; lpc++) {
-                 crm_node_t *peer = NULL;
-+                const char *name = NULL;
-
-                 if (cman_nodes[lpc].cn_nodeid == 0) {
-                     /* Never allow node ID 0 to be considered a member #315711 */
-@@ -376,7 +393,11 @@ cman_event_callback(cman_handle_t handle, void *privdata, int \
                reason, int arg)
-                     continue;
-                 }
-
--                peer = crm_get_peer(cman_nodes[lpc].cn_nodeid, \
                cman_nodes[lpc].cn_name);
-+                if(valid_cman_name(cman_nodes[lpc].cn_name, \
                cman_nodes[lpc].cn_nodeid)) {
-+                    name = cman_nodes[lpc].cn_name;
-+                }
-+
-+                peer = crm_get_peer(cman_nodes[lpc].cn_nodeid, name);
-                 if(cman_nodes[lpc].cn_member) {
-                     crm_update_peer_state(__FUNCTION__, peer, CRM_NODE_MEMBER, \
                crm_peer_seq);
-
-@@ -631,15 +652,17 @@ cman_node_name(uint32_t nodeid)
-
-     cman = cman_init(NULL);
-     if (cman != NULL && cman_is_active(cman)) {
--        us.cn_name[0] = 0;
-+
-+        memset(&us, 0, sizeof(cman_node_t));
-         cman_get_node(cman, nodeid, &us);
--        name = strdup(us.cn_name);
--        crm_info("Using CMAN node name %s for %u", name, nodeid);
--    }
-+        if(valid_cman_name(us.cn_name, nodeid)) {
-+            name = strdup(us.cn_name);
-+            crm_info("Using CMAN node name %s for %u", name, nodeid);
-+        }
-+     }
-
-     cman_finish(cman);
- #  endif
--
-     if (name == NULL) {
-         crm_debug("Unable to get node name for nodeid %u", nodeid);
-     }
-@@ -667,7 +690,6 @@ init_cs_connection_once(crm_cluster_t * cluster)
-             if (cluster_connect_cpg(cluster) == FALSE) {
-                 return FALSE;
-             }
--            cluster->uname = cman_node_name(0 /* CMAN_NODEID_US */ );
-             break;
-         case pcmk_cluster_heartbeat:
-             crm_info("Could not find an active corosync based cluster");
-diff --git a/lib/common/ipc.c b/lib/common/ipc.c
-index d71c54a..f4188ed 100644
---- a/lib/common/ipc.c
-+++ b/lib/common/ipc.c
-@@ -46,8 +46,8 @@ struct crm_ipc_response_header {
- };
-
- static int hdr_offset = 0;
--static int ipc_buffer_max = 0;
--static unsigned int pick_ipc_buffer(int max);
-+static unsigned int ipc_buffer_max = 0;
-+static unsigned int pick_ipc_buffer(unsigned int max);
-
- static inline void
- crm_ipc_init(void)
-@@ -60,7 +60,7 @@ crm_ipc_init(void)
-     }
- }
-
--int
-+unsigned int
- crm_ipc_default_buffer_size(void)
- {
-     return pick_ipc_buffer(0);
-@@ -91,7 +91,7 @@ generateReference(const char *custom1, const char *custom2)
-     since_epoch = calloc(1, reference_len);
-
-     if (since_epoch != NULL) {
--        sprintf(since_epoch, "%s-%s-%ld-%u",
-+        sprintf(since_epoch, "%s-%s-%lu-%u",
-                 local_cust1, local_cust2, (unsigned long)time(NULL), \
                ref_counter++);
-     }
-
-@@ -431,7 +431,7 @@ crm_ipcs_recv(crm_client_t * c, void *data, size_t size, \
                uint32_t * id, uint32_t
-         unsigned int size_u = 1 + header->size_uncompressed;
-         uncompressed = calloc(1, size_u);
-
--        crm_trace("Decompressing message data %d bytes into %d bytes",
-+        crm_trace("Decompressing message data %u bytes into %u bytes",
-                   header->size_compressed, size_u);
-
-         rc = BZ2_bzBuffToBuffDecompress(uncompressed, &size_u, text, \
                header->size_compressed, 1, 0);
-@@ -531,9 +531,9 @@ crm_ipcs_flush_events(crm_client_t * c)
- }
-
- ssize_t
--crm_ipc_prepare(uint32_t request, xmlNode * message, struct iovec ** result, \
                int32_t max_send_size)
-+crm_ipc_prepare(uint32_t request, xmlNode * message, struct iovec ** result, \
                uint32_t max_send_size)
- {
--    static int biggest = 0;
-+    static unsigned int biggest = 0;
-     struct iovec *iov;
-     unsigned int total = 0;
-     char *compressed = NULL;
-@@ -579,20 +579,18 @@ crm_ipc_prepare(uint32_t request, xmlNode * message, struct \
                iovec ** result, int
-
-             free(buffer);
-
--            if (header->size_compressed > biggest) {
--                biggest = 2 * QB_MAX(header->size_compressed, biggest);
--            }
-+            biggest = QB_MAX(header->size_compressed, biggest);
-
-         } else {
-             ssize_t rc = -EMSGSIZE;
-
-             crm_log_xml_trace(message, "EMSGSIZE");
--            biggest = 2 * QB_MAX(header->size_uncompressed, biggest);
-+            biggest = QB_MAX(header->size_uncompressed, biggest);
-
-             crm_err
--                ("Could not compress the message into less than the configured ipc \
                limit (%d bytes)."
--                 "Set PCMK_ipc_buffer to a higher value (%d bytes suggested)", \
                max_send_size,
--                 biggest);
-+                ("Could not compress the message (%u bytes) into less than the \
                configured ipc limit (%u bytes). "
-+                 "Set PCMK_ipc_buffer to a higher value (%u bytes suggested)",
-+                 header->size_uncompressed, max_send_size, 4 * biggest);
-
-             free(compressed);
-             free(buffer);
-@@ -656,7 +654,7 @@ crm_ipcs_sendv(crm_client_t * c, struct iovec * iov, enum \
                crm_ipc_flags flags)
-
-         rc = qb_ipcs_response_sendv(c->ipcs, iov, 2);
-         if (rc < header->qb.size) {
--            crm_notice("Response %d to %p[%d] (%d bytes) failed: %s (%d)",
-+            crm_notice("Response %d to %p[%d] (%u bytes) failed: %s (%d)",
-                        header->qb.id, c->ipcs, c->pid, header->qb.size, \
                pcmk_strerror(rc), rc);
-
-         } else {
-@@ -747,9 +745,9 @@ struct crm_ipc_s {
- };
-
- static unsigned int
--pick_ipc_buffer(int max)
-+pick_ipc_buffer(unsigned int max)
- {
--    static int global_max = 0;
-+    static unsigned int global_max = 0;
-
-     if(global_max == 0) {
-         const char *env = getenv("PCMK_ipc_buffer");
-@@ -925,7 +923,7 @@ crm_ipc_decompress(crm_ipc_t * client)
-         unsigned int new_buf_size = QB_MAX((hdr_offset + size_u), \
                client->max_buf_size);
-         char *uncompressed = calloc(1, new_buf_size);
-
--        crm_trace("Decompressing message data %d bytes into %d bytes",
-+        crm_trace("Decompressing message data %u bytes into %u bytes",
-                  header->size_compressed, size_u);
-
-         rc = BZ2_bzBuffToBuffDecompress(uncompressed + hdr_offset, &size_u,
-@@ -986,7 +984,7 @@ crm_ipc_read(crm_ipc_t * client)
-             return -EBADMSG;
-         }
-
--        crm_trace("Received %s event %d, size=%d, rc=%d, text: %.100s",
-+        crm_trace("Received %s event %d, size=%u, rc=%d, text: %.100s",
-                   client->name, header->qb.id, header->qb.size, client->msg_size,
-                   client->buffer + hdr_offset);
-
-@@ -1166,9 +1164,9 @@ crm_ipc_send(crm_ipc_t * client, xmlNode * message, enum \
                crm_ipc_flags flags, in
-
-     if(header->size_compressed) {
-         if(factor < 10 && (client->max_buf_size / 10) < (rc / factor)) {
--            crm_notice("Compressed message exceeds %d0%% of the configured ipc \
                limit (%d bytes), "
--                       "consider setting PCMK_ipc_buffer to %d or higher",
--                       factor, client->max_buf_size, 2*client->max_buf_size);
-+            crm_notice("Compressed message exceeds %d0%% of the configured ipc \
                limit (%u bytes), "
-+                       "consider setting PCMK_ipc_buffer to %u or higher",
-+                       factor, client->max_buf_size, 2 * client->max_buf_size);
-             factor++;
-         }
-     }
-@@ -1211,7 +1209,7 @@ crm_ipc_send(crm_ipc_t * client, xmlNode * message, enum \
                crm_ipc_flags flags, in
-     if (rc > 0) {
-         struct crm_ipc_response_header *hdr = (struct crm_ipc_response_header \
                *)(void*)client->buffer;
-
--        crm_trace("Received response %d, size=%d, rc=%ld, text: %.200s", \
                hdr->qb.id, hdr->qb.size,
-+        crm_trace("Received response %d, size=%u, rc=%ld, text: %.200s", \
                hdr->qb.id, hdr->qb.size,
-                   rc, crm_ipc_buffer(client));
-
-         if (reply) {
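
The ipc.c hunks above move all buffer-size bookkeeping to unsigned int and
line the %u format specifiers up with it, so a large PCMK_ipc_buffer value or
compressed-message size can no longer wrap negative in comparisons or log
output. The environment-driven sizing in the same unsigned style, as a sketch
(the 512 KiB fallback is an assumed placeholder, not necessarily Pacemaker's
actual default):

    #include <stdlib.h>

    /* Pick an IPC buffer size as an unsigned quantity end to end, in
     * the spirit of pick_ipc_buffer(). Fallback value is illustrative. */
    static unsigned int pick_buffer_size(unsigned int requested)
    {
        static unsigned int configured = 0;

        if (configured == 0) {
            const char *env = getenv("PCMK_ipc_buffer");
            long parsed = (env != NULL) ? strtol(env, NULL, 10) : 0;

            configured = (parsed > 0) ? (unsigned int) parsed
                                      : 512u * 1024u;
        }
        return (requested > configured) ? requested : configured;
    }

The EMSGSIZE hunk also stops doubling "biggest" on every failure: it now
tracks the largest size actually seen, reports the uncompressed size
alongside the configured limit, and suggests 4 * biggest instead.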
-diff --git a/lib/common/xml.c b/lib/common/xml.c
-index 8eed245..299c7bf 100644
---- a/lib/common/xml.c
-+++ b/lib/common/xml.c
-@@ -3821,6 +3821,7 @@ crm_xml_dump(xmlNode * data, int options, char **buffer, int \
                *offset, int *max,
-     if(data == NULL) {
-         *offset = 0;
-         *max = 0;
-+        return;
-     }
- #if 0
-     if (is_not_set(options, xml_log_option_filtered)) {
-@@ -5621,7 +5622,7 @@ update_validation(xmlNode ** xml_blob, int *best, int max, \
                gboolean transform, g
-                 break;
-
-             } else if (known_schemas[lpc].transform == NULL) {
--                crm_notice("%s-style configuration is also valid for %s",
-+                crm_debug("%s-style configuration is also valid for %s",
-                            known_schemas[lpc].name, known_schemas[next].name);
-
-                 if (validate_with(xml, next, to_logs)) {
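
The crm_xml_dump() hunk above is a NULL-guard fix: the function already reset
*offset and *max when data was NULL, but then fell through and kept using
data. Adding the early return turns the check into a proper guard clause, as
in this reduced shape (names are illustrative):

    #include <stddef.h>

    /* Reduced shape of the fixed function: reset the outputs and stop
     * when there is nothing to serialize. */
    static void dump_xml(const void *data, int *offset, int *max)
    {
        if (data == NULL) {
            *offset = 0;
            *max = 0;
            return;   /* previously missing; execution fell through */
        }
        /* ... serialize data into the caller's buffer ... */
    }

The update_validation() change in the same file is independent: a routine
schema-compatibility message drops from notice to debug severity.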
-diff --git a/lib/lrmd/lrmd_client.c b/lib/lrmd/lrmd_client.c
-index f5e34ee..42bdf2b 100644
---- a/lib/lrmd/lrmd_client.c
-+++ b/lib/lrmd/lrmd_client.c
-@@ -1369,7 +1369,7 @@ lrmd_api_disconnect(lrmd_t * lrmd)
- {
-     lrmd_private_t *native = lrmd->private;
-
--    crm_info("Disconnecting from lrmd service");
-+    crm_info("Disconnecting from %d lrmd service", native->type);
-     switch (native->type) {
-         case CRM_CLIENT_IPC:
-             lrmd_ipc_disconnect(lrmd);
-diff --git a/lib/services/dbus.c b/lib/services/dbus.c
-index e2efecb..d42affe 100644
---- a/lib/services/dbus.c
-+++ b/lib/services/dbus.c
-@@ -329,9 +329,6 @@ pcmk_dbus_lookup_cb(DBusPendingCall *pending, void *user_data)
-
-     pcmk_dbus_lookup_result(reply, user_data);
-
--    if(pending) {
--        dbus_pending_call_unref(pending);
--    }
-     if(reply) {
-         dbus_message_unref(reply);
-     }
-diff --git a/lib/services/services.c b/lib/services/services.c
-index 7e2b9f7..3f40078 100644
---- a/lib/services/services.c
-+++ b/lib/services/services.c
-@@ -150,6 +150,7 @@ resources_action_create(const char *name, const char *standard, \
                const char *prov
-
-     op = calloc(1, sizeof(svc_action_t));
-     op->opaque = calloc(1, sizeof(svc_action_private_t));
-+    op->opaque->pending = NULL;
-     op->rsc = strdup(name);
-     op->action = strdup(action);
-     op->interval = interval;
-@@ -158,6 +159,7 @@ resources_action_create(const char *name, const char *standard, \
                const char *prov
-     op->agent = strdup(agent);
-     op->sequence = ++operations;
-     op->flags = flags;
-+
-     if (asprintf(&op->id, "%s_%s_%d", name, action, interval) == -1) {
-         goto return_error;
-     }
-@@ -335,6 +337,7 @@ services_action_create_generic(const char *exec, const char \
                *args[])
-
-     op->opaque->exec = strdup(exec);
-     op->opaque->args[0] = strdup(exec);
-+    op->opaque->pending = NULL;
-
-     for (cur_arg = 1; args && args[cur_arg - 1]; cur_arg++) {
-         op->opaque->args[cur_arg] = strdup(args[cur_arg - 1]);
-@@ -361,17 +364,17 @@ services_set_op_pending(svc_action_t *op, DBusPendingCall \
                *pending)
- {
-     if (op->opaque->pending && (op->opaque->pending != pending)) {
-         if (pending) {
--            crm_info("Lost pending DBus call (%p)", op->opaque->pending);
-+            crm_info("Lost pending %s DBus call (%p)", op->id, \
                op->opaque->pending);
-         } else {
--            crm_trace("Done with pending DBus call (%p)", op->opaque->pending);
-+            crm_info("Done with pending %s DBus call (%p)", op->id, \
                op->opaque->pending);
-         }
-         dbus_pending_call_unref(op->opaque->pending);
-     }
-     op->opaque->pending = pending;
-     if (pending) {
--        crm_trace("Updated pending DBus call (%p)", pending);
-+        crm_info("Updated pending %s DBus call (%p)", op->id, pending);
-     } else {
--        crm_trace("Cleared pending DBus call");
-+        crm_info("Cleared pending %s DBus call", op->id);
-     }
- }
- #endif
-@@ -457,7 +460,7 @@ services_action_free(svc_action_t * op)
- gboolean
- cancel_recurring_action(svc_action_t * op)
- {
--    crm_info("Cancelling operation %s", op->id);
-+    crm_info("Cancelling %s operation %s", op->standard, op->id);
-
-     if (recurring_actions) {
-         g_hash_table_remove(recurring_actions, op->id);
-diff --git a/lib/services/systemd.c b/lib/services/systemd.c
-index e1e1bc9..ca56915 100644
---- a/lib/services/systemd.c
-+++ b/lib/services/systemd.c
-@@ -189,16 +189,13 @@ systemd_loadunit_cb(DBusPendingCall *pending, void *user_data)
-         reply = dbus_pending_call_steal_reply(pending);
-     }
-
--    if(op) {
--        crm_trace("Got result: %p for %p for %s, %s", reply, pending, op->rsc, \
                op->action);
--    } else {
--        crm_trace("Got result: %p for %p", reply, pending);
--    }
-+    crm_trace("Got result: %p for %p / %p for %s", reply, pending, \
                op->opaque->pending, op->id);
-+
-+    CRM_LOG_ASSERT(pending == op->opaque->pending);
-+    services_set_op_pending(op, NULL);
-+
-     systemd_loadunit_result(reply, user_data);
-
--    if(pending) {
--        dbus_pending_call_unref(pending);
--    }
-     if(reply) {
-         dbus_message_unref(reply);
-     }
-@@ -209,6 +206,7 @@ systemd_unit_by_name(const gchar * arg_name, svc_action_t *op)
- {
-     DBusMessage *msg;
-     DBusMessage *reply = NULL;
-+    DBusPendingCall* pending = NULL;
-     char *name = NULL;
-
- /*
-@@ -249,7 +247,11 @@ systemd_unit_by_name(const gchar * arg_name, svc_action_t *op)
-         return munit;
-     }
-
--    pcmk_dbus_send(msg, systemd_proxy, systemd_loadunit_cb, op, op? op->timeout : \
                DBUS_TIMEOUT_USE_DEFAULT);
-+    pending = pcmk_dbus_send(msg, systemd_proxy, systemd_loadunit_cb, op, \
                op->timeout);
-+    if(pending) {
-+        services_set_op_pending(op, pending);
-+    }
-+
-     dbus_message_unref(msg);
-     return NULL;
- }
-@@ -459,23 +461,12 @@ systemd_async_dispatch(DBusPendingCall *pending, void \
                *user_data)
-         reply = dbus_pending_call_steal_reply(pending);
-     }
-
--    if(op) {
--        crm_trace("Got result: %p for %p for %s, %s", reply, pending, op->rsc, \
                op->action);
--        if (pending == op->opaque->pending) {
--            op->opaque->pending = NULL;
--        } else {
--            crm_info("Received unexpected reply for pending DBus call (%p vs %p)",
--                     op->opaque->pending, pending);
--        }
--        systemd_exec_result(reply, op);
-+    crm_trace("Got result: %p for %p for %s, %s", reply, pending, op->rsc, \
                op->action);
-
--    } else {
--        crm_trace("Got result: %p for %p", reply, pending);
--    }
-+    CRM_LOG_ASSERT(pending == op->opaque->pending);
-+    services_set_op_pending(op, NULL);
-+    systemd_exec_result(reply, op);
-
--    if(pending) {
--        dbus_pending_call_unref(pending);
--    }
-     if(reply) {
-         dbus_message_unref(reply);
-     }
-@@ -536,7 +527,6 @@ systemd_unit_exec_with_unit(svc_action_t * op, const char *unit)
-             free(state);
-             return op->rc == PCMK_OCF_OK;
-         } else if (pending) {
--            dbus_pending_call_ref(pending);
-             services_set_op_pending(op, pending);
-             return TRUE;
-         }
-diff --git a/lib/services/upstart.c b/lib/services/upstart.c
-index 31b875b..eb8cfa8 100644
---- a/lib/services/upstart.c
-+++ b/lib/services/upstart.c
-@@ -322,10 +322,7 @@ upstart_job_check(const char *name, const char *state, void \
                *userdata)
-     }
-
-     if (op->synchronous == FALSE) {
--        if (op->opaque->pending) {
--            dbus_pending_call_unref(op->opaque->pending);
--        }
--        op->opaque->pending = NULL;
-+        services_set_op_pending(op, NULL);
-         operation_finalize(op);
-     }
- }
-@@ -392,6 +389,7 @@ upstart_async_dispatch(DBusPendingCall *pending, void \
                *user_data)
-     if(pending) {
-         reply = dbus_pending_call_steal_reply(pending);
-     }
-+
-     if(pcmk_dbus_find_error(op->action, pending, reply, &error)) {
-
-         /* ignore "already started" or "not running" errors */
-@@ -419,11 +417,10 @@ upstart_async_dispatch(DBusPendingCall *pending, void \
                *user_data)
-         }
-     }
-
-+    CRM_LOG_ASSERT(pending == op->opaque->pending);
-+    services_set_op_pending(op, NULL);
-     operation_finalize(op);
-
--    if(pending) {
--        dbus_pending_call_unref(pending);
--    }
-     if(reply) {
-         dbus_message_unref(reply);
-     }
-@@ -483,8 +480,7 @@ upstart_job_exec(svc_action_t * op, gboolean synchronous)
-                 free(state);
-                 return op->rc == PCMK_OCF_OK;
-             } else if (pending) {
--                dbus_pending_call_ref(pending);
--                op->opaque->pending = pending;
-+                services_set_op_pending(op, pending);
-                 return TRUE;
-             }
-             return FALSE;
-@@ -527,8 +523,7 @@ upstart_job_exec(svc_action_t * op, gboolean synchronous)
-         free(job);
-
-         if(pending) {
--            dbus_pending_call_ref(pending);
--            op->opaque->pending = pending;
-+            services_set_op_pending(op, pending);
-             return TRUE;
-         }
-         return FALSE;
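
The dbus.c, systemd.c and upstart.c hunks above converge on one ownership
rule for DBusPendingCall: the call returned by pcmk_dbus_send() is stored and
released only through services_set_op_pending(), and each dispatch callback
asserts that the reply belongs to the tracked call before clearing it. The
extra dbus_pending_call_ref()/unref() pairs scattered through the old code
become unnecessary. A reduced sketch of that single-owner setter (stand-in
types, not libdbus):

    #include <stdio.h>

    typedef struct {        /* stand-in for DBusPendingCall */
        int refs;
    } pending_t;

    static void pending_unref(pending_t *p)
    {
        if (p != NULL && --p->refs == 0) {
            printf("call %p released\n", (void *) p);
        }
    }

    typedef struct {        /* stand-in for svc_action_t */
        pending_t *pending;
    } op_t;

    /* The one place a tracked call is replaced or dropped, mirroring
     * services_set_op_pending(): the old reference is released here,
     * so callers need no ref/unref bookkeeping of their own. */
    static void set_op_pending(op_t *op, pending_t *pending)
    {
        if (op->pending != NULL && op->pending != pending) {
            pending_unref(op->pending);
        }
        op->pending = pending;
    }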
-diff --git a/lrmd/ipc_proxy.c b/lrmd/ipc_proxy.c
-index 72d83c4..9427393 100644
---- a/lrmd/ipc_proxy.c
-+++ b/lrmd/ipc_proxy.c
-@@ -165,14 +165,14 @@ ipc_proxy_forward_client(crm_client_t *ipc_proxy, xmlNode \
                *xml)
-      */
-
-     if (safe_str_eq(msg_type, "event")) {
--        crm_info("Sending event to %s", ipc_client->id);
-+        crm_trace("Sending event to %s", ipc_client->id);
-         rc = crm_ipcs_send(ipc_client, 0, msg, crm_ipc_server_event);
-
-     } else if (safe_str_eq(msg_type, "response")) {
-         int msg_id = 0;
-
-         crm_element_value_int(xml, F_LRMD_IPC_MSG_ID, &msg_id);
--        crm_info("Sending response to %d - %s", ipc_client->request_id, \
                ipc_client->id);
-+        crm_trace("Sending response to %d - %s", ipc_client->request_id, \
                ipc_client->id);
-         rc = crm_ipcs_send(ipc_client, msg_id, msg, FALSE);
-
-         CRM_LOG_ASSERT(msg_id == ipc_client->request_id);
-diff --git a/lrmd/pacemaker_remote.service.in b/lrmd/pacemaker_remote.service.in
-index 7ec42b4..15e61fb 100644
---- a/lrmd/pacemaker_remote.service.in
-+++ b/lrmd/pacemaker_remote.service.in
-@@ -9,7 +9,6 @@ WantedBy=multi-user.target
- Type=simple
- KillMode=process
- NotifyAccess=none
--SysVStartPriority=99
- EnvironmentFile=-/etc/sysconfig/pacemaker
-
- ExecStart=@sbindir@/pacemaker_remoted
-diff --git a/mcp/pacemaker.service.in b/mcp/pacemaker.service.in
-index 2ef9454..9b0a824 100644
---- a/mcp/pacemaker.service.in
-+++ b/mcp/pacemaker.service.in
-@@ -20,7 +20,6 @@ WantedBy=multi-user.target
- Type=simple
- KillMode=process
- NotifyAccess=main
--SysVStartPriority=99
- EnvironmentFile=-@sysconfdir@/sysconfig/pacemaker
- EnvironmentFile=-@sysconfdir@/sysconfig/sbd
- SuccessExitStatus=100
-diff --git a/pengine/allocate.c b/pengine/allocate.c
-index ec5a18d..c2e56f9 100644
---- a/pengine/allocate.c
-+++ b/pengine/allocate.c
-@@ -1495,11 +1495,12 @@ stage6(pe_working_set_t * data_set)
-         }
-     }
-
--    if (last_stonith) {
--        order_actions(last_stonith, done, pe_order_implies_then);
-
--    } else if (dc_fence) {
-+    if (dc_fence) {
-         order_actions(dc_down, done, pe_order_implies_then);
-+
-+    } else if (last_stonith) {
-+        order_actions(last_stonith, done, pe_order_implies_then);
-     }
-
-     order_actions(done, all_stopped, pe_order_implies_then);
-diff --git a/pengine/test10/rec-node-14.dot b/pengine/test10/rec-node-14.dot
-index 395fa89..5ceef92 100644
---- a/pengine/test10/rec-node-14.dot
-+++ b/pengine/test10/rec-node-14.dot
-@@ -2,9 +2,9 @@
- "all_stopped" [ style=bold color="green" fontcolor="orange" ]
- "stonith 'reboot' node1" -> "stonith 'reboot' node3" [ style = bold]
- "stonith 'reboot' node1" [ style=bold color="green" fontcolor="black"]
-+"stonith 'reboot' node2" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' node2" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' node3" -> "stonith 'reboot' node2" [ style = bold]
--"stonith 'reboot' node3" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' node3" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" [ style=bold color="green" fontcolor="orange" ]
-diff --git a/pengine/test10/rec-node-14.exp b/pengine/test10/rec-node-14.exp
-index 58bb5ca..0e5e163 100644
---- a/pengine/test10/rec-node-14.exp
-+++ b/pengine/test10/rec-node-14.exp
-@@ -39,7 +39,7 @@
-      </action_set>
-      <inputs>
-        <trigger>
--        <crm_event id="5" operation="stonith" operation_key="stonith-node3-reboot" \
                on_node="node3" on_node_uuid="uuid3"/>
-+        <crm_event id="4" operation="stonith" operation_key="stonith-node2-reboot" \
                on_node="node2" on_node_uuid="uuid2"/>
-        </trigger>
-      </inputs>
-    </synapse>
-diff --git a/pengine/test10/stonith-0.dot b/pengine/test10/stonith-0.dot
-index 29cdd59..8ad32fd 100644
---- a/pengine/test10/stonith-0.dot
-+++ b/pengine/test10/stonith-0.dot
-@@ -71,13 +71,13 @@ digraph "g" {
- "stonith 'reboot' c001n03" -> "ocf_192.168.100.181_stop_0 c001n03" [ style = bold]
- "stonith 'reboot' c001n03" -> "ocf_192.168.100.183_stop_0 c001n03" [ style = bold]
- "stonith 'reboot' c001n03" -> "rsc_c001n07_stop_0 c001n03" [ style = bold]
-+"stonith 'reboot' c001n03" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' c001n03" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' c001n05" -> "group-1_stop_0" [ style = bold]
- "stonith 'reboot' c001n05" -> "ocf_192.168.100.181_stop_0 c001n05" [ style = bold]
- "stonith 'reboot' c001n05" -> "ocf_192.168.100.183_stop_0 c001n05" [ style = bold]
- "stonith 'reboot' c001n05" -> "rsc_c001n05_stop_0 c001n05" [ style = bold]
- "stonith 'reboot' c001n05" -> "stonith 'reboot' c001n03" [ style = bold]
--"stonith 'reboot' c001n05" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' c001n05" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" -> "heartbeat_192.168.100.182_start_0 c001n02" [ style = bold]
-diff --git a/pengine/test10/stonith-0.exp b/pengine/test10/stonith-0.exp
-index 9d47215..a6695c9 100644
---- a/pengine/test10/stonith-0.exp
-+++ b/pengine/test10/stonith-0.exp
-@@ -394,7 +394,7 @@
-     </action_set>
-     <inputs>
-       <trigger>
--        <crm_event id="111" operation="stonith" \
operation_key="stonith-c001n05-reboot" on_node="c001n05" \
                on_node_uuid="52a5ea5e-86ee-442c-b251-0bc9825c517e"/>
-+        <crm_event id="110" operation="stonith" \
operation_key="stonith-c001n03-reboot" on_node="c001n03" \
                on_node_uuid="f5e1d2de-73da-432a-9d5c-37472253c2ee"/>
-       </trigger>
-     </inputs>
-   </synapse>
-diff --git a/pengine/test10/systemhealth1.dot b/pengine/test10/systemhealth1.dot
-index 28841b7..a29f519 100644
---- a/pengine/test10/systemhealth1.dot
-+++ b/pengine/test10/systemhealth1.dot
-@@ -1,8 +1,8 @@
- digraph "g" {
- "all_stopped" [ style=bold color="green" fontcolor="orange" ]
-+"stonith 'reboot' hs21c" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21c" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' hs21d" -> "stonith 'reboot' hs21c" [ style = bold]
--"stonith 'reboot' hs21d" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21d" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" [ style=bold color="green" fontcolor="orange" ]
-diff --git a/pengine/test10/systemhealth1.exp b/pengine/test10/systemhealth1.exp
-index 80a2329..aa2afe1 100644
---- a/pengine/test10/systemhealth1.exp
-+++ b/pengine/test10/systemhealth1.exp
-@@ -27,7 +27,7 @@
-     </action_set>
-     <inputs>
-       <trigger>
--        <crm_event id="4" operation="stonith" operation_key="stonith-hs21d-reboot" \
                on_node="hs21d" on_node_uuid="737318c6-0f92-4592-9754-45967d45aff7"/>
-+        <crm_event id="3" operation="stonith" operation_key="stonith-hs21c-reboot" \
                on_node="hs21c" on_node_uuid="c97a3ee5-02d8-4fad-a9fb-a79ae2b35549"/>
-       </trigger>
-     </inputs>
-   </synapse>
-diff --git a/pengine/test10/systemhealthm1.dot b/pengine/test10/systemhealthm1.dot
-index 28841b7..a29f519 100644
---- a/pengine/test10/systemhealthm1.dot
-+++ b/pengine/test10/systemhealthm1.dot
-@@ -1,8 +1,8 @@
- digraph "g" {
- "all_stopped" [ style=bold color="green" fontcolor="orange" ]
-+"stonith 'reboot' hs21c" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21c" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' hs21d" -> "stonith 'reboot' hs21c" [ style = bold]
--"stonith 'reboot' hs21d" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21d" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" [ style=bold color="green" fontcolor="orange" ]
-diff --git a/pengine/test10/systemhealthm1.exp b/pengine/test10/systemhealthm1.exp
-index 80a2329..aa2afe1 100644
---- a/pengine/test10/systemhealthm1.exp
-+++ b/pengine/test10/systemhealthm1.exp
-@@ -27,7 +27,7 @@
-     </action_set>
-     <inputs>
-       <trigger>
--        <crm_event id="4" operation="stonith" operation_key="stonith-hs21d-reboot" \
                on_node="hs21d" on_node_uuid="737318c6-0f92-4592-9754-45967d45aff7"/>
-+        <crm_event id="3" operation="stonith" operation_key="stonith-hs21c-reboot" \
                on_node="hs21c" on_node_uuid="c97a3ee5-02d8-4fad-a9fb-a79ae2b35549"/>
-       </trigger>
-     </inputs>
-   </synapse>
-diff --git a/pengine/test10/systemhealthn1.dot b/pengine/test10/systemhealthn1.dot
-index 28841b7..a29f519 100644
---- a/pengine/test10/systemhealthn1.dot
-+++ b/pengine/test10/systemhealthn1.dot
-@@ -1,8 +1,8 @@
- digraph "g" {
- "all_stopped" [ style=bold color="green" fontcolor="orange" ]
-+"stonith 'reboot' hs21c" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21c" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' hs21d" -> "stonith 'reboot' hs21c" [ style = bold]
--"stonith 'reboot' hs21d" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21d" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" [ style=bold color="green" fontcolor="orange" ]
-diff --git a/pengine/test10/systemhealthn1.exp b/pengine/test10/systemhealthn1.exp
-index 80a2329..aa2afe1 100644
---- a/pengine/test10/systemhealthn1.exp
-+++ b/pengine/test10/systemhealthn1.exp
-@@ -27,7 +27,7 @@
-     </action_set>
-     <inputs>
-       <trigger>
--        <crm_event id="4" operation="stonith" operation_key="stonith-hs21d-reboot" \
                on_node="hs21d" on_node_uuid="737318c6-0f92-4592-9754-45967d45aff7"/>
-+        <crm_event id="3" operation="stonith" operation_key="stonith-hs21c-reboot" \
                on_node="hs21c" on_node_uuid="c97a3ee5-02d8-4fad-a9fb-a79ae2b35549"/>
-       </trigger>
-     </inputs>
-   </synapse>
-diff --git a/pengine/test10/systemhealtho1.dot b/pengine/test10/systemhealtho1.dot
-index 28841b7..a29f519 100644
---- a/pengine/test10/systemhealtho1.dot
-+++ b/pengine/test10/systemhealtho1.dot
-@@ -1,8 +1,8 @@
- digraph "g" {
- "all_stopped" [ style=bold color="green" fontcolor="orange" ]
-+"stonith 'reboot' hs21c" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21c" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' hs21d" -> "stonith 'reboot' hs21c" [ style = bold]
--"stonith 'reboot' hs21d" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21d" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" [ style=bold color="green" fontcolor="orange" ]
-diff --git a/pengine/test10/systemhealtho1.exp b/pengine/test10/systemhealtho1.exp
-index 80a2329..aa2afe1 100644
---- a/pengine/test10/systemhealtho1.exp
-+++ b/pengine/test10/systemhealtho1.exp
-@@ -27,7 +27,7 @@
-     </action_set>
-     <inputs>
-       <trigger>
--        <crm_event id="4" operation="stonith" operation_key="stonith-hs21d-reboot" \
                on_node="hs21d" on_node_uuid="737318c6-0f92-4592-9754-45967d45aff7"/>
-+        <crm_event id="3" operation="stonith" operation_key="stonith-hs21c-reboot" \
                on_node="hs21c" on_node_uuid="c97a3ee5-02d8-4fad-a9fb-a79ae2b35549"/>
-       </trigger>
-     </inputs>
-   </synapse>
-diff --git a/pengine/test10/systemhealthp1.dot b/pengine/test10/systemhealthp1.dot
-index 28841b7..a29f519 100644
---- a/pengine/test10/systemhealthp1.dot
-+++ b/pengine/test10/systemhealthp1.dot
-@@ -1,8 +1,8 @@
- digraph "g" {
- "all_stopped" [ style=bold color="green" fontcolor="orange" ]
-+"stonith 'reboot' hs21c" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21c" [ style=bold color="green" fontcolor="black"]
- "stonith 'reboot' hs21d" -> "stonith 'reboot' hs21c" [ style = bold]
--"stonith 'reboot' hs21d" -> "stonith_complete" [ style = bold]
- "stonith 'reboot' hs21d" [ style=bold color="green" fontcolor="black"]
- "stonith_complete" -> "all_stopped" [ style = bold]
- "stonith_complete" [ style=bold color="green" fontcolor="orange" ]
-diff --git a/pengine/test10/systemhealthp1.exp b/pengine/test10/systemhealthp1.exp
-index 80a2329..aa2afe1 100644
---- a/pengine/test10/systemhealthp1.exp
-+++ b/pengine/test10/systemhealthp1.exp
-@@ -27,7 +27,7 @@
-     </action_set>
-     <inputs>
-       <trigger>
--        <crm_event id="4" operation="stonith" operation_key="stonith-hs21d-reboot" \
                on_node="hs21d" on_node_uuid="737318c6-0f92-4592-9754-45967d45aff7"/>
-+        <crm_event id="3" operation="stonith" operation_key="stonith-hs21c-reboot" \
                on_node="hs21c" on_node_uuid="c97a3ee5-02d8-4fad-a9fb-a79ae2b35549"/>
-       </trigger>
-     </inputs>
-   </synapse>
-diff --git a/tools/1node2heartbeat b/tools/1node2heartbeat
-deleted file mode 100755
-index b63a0c8..0000000
---- a/tools/1node2heartbeat
-+++ /dev/null
-@@ -1,326 +0,0 @@
--#!/usr/bin/python
--#
--#	Program to determine current list of enabled services for init state 3
--#	and create heartbeat CRM configuration for heartbeat to manage them
--#
--__copyright__='''
--Author: Alan Robertson	<alanr@unix.sh>
--Copyright (C) 2006 International Business Machines
--'''
--
--# This program is free software; you can redistribute it and/or
--# modify it under the terms of the GNU General Public License
--# as published by the Free Software Foundation; either version 2
--# of the License, or (at your option) any later version.
--#
--# This program is distributed in the hope that it will be useful,
--# but WITHOUT ANY WARRANTY; without even the implied warranty of
--# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
--# GNU General Public License for more details.
--#
--# You should have received a copy of the GNU General Public License
--# along with this program; if not, write to the Free Software
--# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
--import os,re
--#
--#	Here's the plan:
--#	Find out the default run level
--#	Find out what (additional?) services are enabled in that run level
--#	Figure out which of them start after the network (or heartbeat?)
--#	Ignore heartbeat :-)
--#	Figure out which services supply the $services
--#		Look to see if the SUSE /etc/insserv.conf file exists
--#			If so, then scan it for who provides the $services
--#			defined by the LSB
--#		If we're on Red Hat, then make some Red Hat type assumptions
--#			(whatever those might be)
--#		If we're not, then make some generic assumptions...
--#	Scan the init scripts for their dependencies...
--#	Eliminate anything at or before 'network'.
--#	Create resources corresponding to all active services
--#	Include monitor actions for those services
--#	that can be started after 'network'
--#	Add the start-after dependencies
--#
--#	Things to consider doing in the future:
--#	Constrain them to only run on the local system?
--#	Put them all in a convenience group (no colocation, no ordering)
--#	Add start and stop timeouts
--
--ServiceKeywords = {}
--ServiceMap = {}
--ProvidesMap = {}
--RequiresMap = {}
--SkipMap = {'heartbeat': None,  'random': None}
--NoMonitor = {'microcode': None}
--PreReqs = ['network']
--IgnoreList = []
--sysname = os.uname()[1]
--InitDir = "/etc/init.d"
--
--def service_is_hb_compatible(service):
--  scriptname = os.path.join(InitDir, service)
--  command=scriptname + " status >/dev/null 2>&1";
--  rc = os.system(command)
--  return rc == 0
--
--def find_ordered_services(dir):
--  allscripts = os.listdir(dir)
--  allscripts.sort()
--  services = []
--  for entry in allscripts:
--    matchobj = re.match("S[0-9]+(.*)", entry)
--    if not matchobj:
--      continue
--    service = matchobj.group(1)
--    if SkipMap.has_key(service):
--      continue
--    if service_is_hb_compatible(service):
--      services.append(service)
--    else:
--      IgnoreList.append(service)
--  return services
--
--
--def register_services(initdir, services):
--  for service in services:
--    if not ServiceMap.has_key(service):
--       ServiceMap[service] = os.path.join(initdir, service)
--  for service in services:
--    script_dependency_scan(service, os.path.join(initdir, service), ServiceMap)
--
--#
--#	From the LSB version 3.1: "Comment Conventions for Init Scripts"
--#
--### BEGIN INIT INFO
--### END INIT INFO
--#
--# The delimiter lines may contain trailing whitespace, which shall be ignored.
--# All lines inside the block shall begin with a hash character '#' in the
--# first column, so the shell interprets them as comment lines which do not
--# affect operation of the script. The lines shall be of the form:
--# {keyword}: arg1 [arg2...]
--# with exactly one space character between the '#' and the keyword, with a
--# single exception. In lines following a line containing the Description
--# keyword, and until the next keyword or block ending delimiter is seen,
--# a line where the '#' is followed by more than one space or a tab
--# character shall be treated as a continuation of the previous line.
--#
--
--# Make this a class to avoid recompiling it for each script we scan.
--class pats:
--  begin=re.compile("###\s+BEGIN\s+INIT\s+INFO")
--  end=re.compile("###\s+END\s+INIT\s+INFO")
--  desc=re.compile("# Description:\s*(.*)", re.IGNORECASE)
--  desc_continue=re.compile("#(  +|\t)\s*(.*)")
--  keyword=re.compile("# ([^\s:]+):\s*(.*)\s*\Z")
--
--def script_keyword_scan(filename, servicename):
--  keywords = {}
--  ST_START=0
--  ST_INITINFO=1
--  ST_DESCRIPTION=1
--  description=""
--  state=ST_START
--
--  try:
--    fd = open(filename)
--  except IOError:
--    return keywords
--
--  while 1:
--    line = fd.readline()
--    if not line:
--      break
--
--    if state == ST_START:
--       if pats.begin.match(line):
--          state = ST_INITINFO
--       continue
--    if pats.end.match(line):
--      break
--
--    if state == ST_DESCRIPTION:
--      match = pats.desc_continue.match(line)
--      if match:
--        description += ("\n" + match.group(2))
--        continue
--      state = ST_INITINFO
--
--    match = pats.desc.match(line)
--    if match:
--      state = ST_DESCRIPTION
--      description = match.group(1)
--      continue
--
--    match = pats.keyword.match(line)
--    if match:
--      keywords[match.group(1)] = match.group(2)
--
--  # Clean up and return
--  fd.close()
--  if description != "":
--    keywords["Description"] = description
--  keywords["_PATHNAME_"] = filename
--  keywords["_RESOURCENAME_"] = "R_" + sysname + "_" + servicename
--  return keywords
--
--def script_dependency_scan(service, script, servicemap):
--  keywords=script_keyword_scan(script, service)
--  ServiceKeywords[service] = keywords
--
--SysServiceGuesses = {
--  '$local_fs':		['boot.localfs'],
--  '$network':		['network'],
--  '$named':		['named'],
--  '$portmap':		['portmap'],
--  '$remote_fs':		['nfs'],
--  '$syslog':		['syslog'],
--  '$netdaemons':	['portmap', 'inetd'],
--  '$time':		['ntp'],
--}
--
--#
--#	For specific versions of Linux, there are often better ways
--#	to do this...
--#
--#	(e.g., for SUSE Linux, one should look at /etc/insserv.conf file)
--#
--def map_sys_services(servicemap):
--  sysservicemap = {}
--  for sysserv in SysServiceGuesses.keys():
--    servlist = SysServiceGuesses[sysserv]
--    result = []
--    for service in servlist:
--      if servicemap.has_key(service):
--        result.append(service)
--
--    sysservicemap[sysserv] = result
--  return sysservicemap
--
--#
--#
--#
--def create_service_dependencies(servicekeywords, systemservicemap):
--  dependencies = {}
--  for service in servicekeywords.keys():
--    if not dependencies.has_key(service):
--      dependencies[service] = {}
--    for key in ('Required-Start', 'Should-Start'):
--      if not servicekeywords[service].has_key(key):
--        continue
--      for depserv in servicekeywords[service][key].split():
--        if systemservicemap.has_key(depserv):
--          sysserv = systemservicemap[depserv]
--          for serv in sysserv:
--            dependencies[service][serv] = None
--        else:
--          if servicekeywords.has_key(depserv):
--            dependencies[service][depserv] = None
--    if len(dependencies[service]) == 0:
--       del dependencies[service]
--  return dependencies
--
--#
--#	Modify the service name map to include all the mappings from
--#	'Provides' services to real service script names...
--#
--def map_script_services(sysservmap, servicekeywords):
--  for service in servicekeywords.keys():
--    if not servicekeywords[service].has_key('Provides'):
--      continue
--    for provided in servicekeywords[service]['Provides'].split():
--      if not sysservmap.has_key(provided):
--        sysservmap[provided] = []
--      sysservmap[provided].append(service)
--  return sysservmap
--
--def create_cib_update(keywords, depmap):
--  services =  keywords.keys()
--  services.sort()
--  result = ""
--  # Create the XML for the resources
--  result += '<cib>\n'
--  result += '<configuration>\n'
--  result += '<crm_config/>\n'
--  result += '<nodes/>\n'
--  result += '<resources>\n'
--  groupname="G_" + sysname + "_localinit"
--  result += ' <group id="'+groupname+'" ordered="0" collocated="0">\n'
--  for service in services:
--    rid = keywords[service]["_RESOURCENAME_"]
--    monid = "OPmon_" + sysname + '_' + service
--    result += \
--        '  <primitive id="' + rid + '" class="lsb" type="'+ service +	\
--        '">\n' + 							\
--        '   <instance_attributes/>\n' +					\
--        '   <operations>\n'
--    if  not NoMonitor.has_key(service):
--      result += \
--        '    <op id="' + monid + '" name="monitor" interval="30s" \
                timeout="30s"/>\n'
--    result += \
--        '   </operations>\n'						\
--        '  </primitive>\n'
--  result += ' </group>\n'
--  result += '</resources>\n'
--  services = depmap.keys()
--  services.sort()
--  result += '<constraints>\n'
--  for service in services:
--    rid = keywords[service]["_RESOURCENAME_"]
--    deps = depmap[service].keys()
--    deps.sort()
--    for dep in deps:
--      if not keywords.has_key(dep):
--        continue
--      depid = keywords[dep]["_RESOURCENAME_"]
--      orderid='O_' + sysname + '_' + service + '_' + dep
--      result += ' <rsc_order id="' + orderid + '" from="' + rid + \
--		'" to="' + depid + '" type="after"/>\n'
--  loc_id="Loc_" + sysname + "_localinit"
--  rule_id="LocRule_" + sysname + "_localinit"
--  expr_id="LocExp_" + sysname + "_localinit"
--
--  result += ' <rsc_location id="' + loc_id + '" rsc="' + groupname + '">\n'
--  result += '  <rule id="' + rule_id + '" score="-INFINITY">\n'
--  result += '   <expression attribute="#uname" id="' + expr_id +	\
--			'" operation="ne" value="' + sysname + '"/>\n'
--  result += '   </rule>\n'
--  result += ' </rsc_location>\n'
--  result += '</constraints>\n'
--  result += '</configuration>\n'
--  result += '<status/>\n'
--  result += '</cib>\n'
--  return result
--
--
--
--def remove_a_prereq(service, servicemap, keywords, deps):
--  if deps.has_key(service):
--    parents = deps[service].keys()
--    del deps[service]
--  else:
--    parents = []
--  if servicemap.has_key(service):
--    del servicemap[service]
--  if keywords.has_key(service):
--    del keywords[service]
--  for parent in parents:
--    if not deps.has_key(parent):
--      continue
--    remove_a_prereq(parent, servicemap, keywords, deps)
--
--
--def remove_important_prereqs(prereqs, servicemap, keywords, deps):
--  # Find everything these important prereqs need and get rid of them...
--  for service in prereqs:
--    remove_a_prereq(service, servicemap, keywords, deps)
--
--ServiceList = find_ordered_services(os.path.join(InitDir, "rc3.d"))
--register_services(InitDir, ServiceList)
--SysServiceMap = map_sys_services(ServiceMap)
--map_script_services(SysServiceMap, ServiceKeywords)
--ServiceDependencies = create_service_dependencies(ServiceKeywords,SysServiceMap)
--remove_important_prereqs(PreReqs, SysServiceMap, ServiceKeywords, \
                ServiceDependencies)
--
--print create_cib_update(ServiceKeywords, ServiceDependencies)
-diff --git a/tools/crm_commands.py.in b/tools/crm_commands.py.in
-deleted file mode 100644
-index c48d82c..0000000
---- a/tools/crm_commands.py.in
-+++ /dev/null
-@@ -1,132 +0,0 @@
--#
--#
--#	pingd OCF Resource Agent
--#	Records (in the CIB) the current number of ping nodes a
--#	   cluster node can connect to.
--#
--# Copyright (c) 2006 Andrew Beekhof
--#                    All Rights Reserved.
--#
--# This program is free software; you can redistribute it and/or modify
--# it under the terms of version 2 of the GNU General Public License as
--# published by the Free Software Foundation.
--#
--# This program is distributed in the hope that it would be useful, but
--# WITHOUT ANY WARRANTY; without even the implied warranty of
--# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
--#
--# Further, this software is distributed without any warranty that it is
--# free of the rightful claim of any third person regarding infringement
--# or the like.  Any license provided herein, whether implied or
--# otherwise, applies only to this software file.  Patent licenses, if
--# any, provided herein do not apply to combinations of this program with
--# other software, or any other product whatsoever.
--#
--# You should have received a copy of the GNU General Public License
--# along with this program; if not, write the Free Software Foundation,
--# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
--#
--#######################################################################
--
--import crm_utils as utl
--
--class HelpRequest(Exception):
--    """Exception raised when a help listing is required."""
--
--class ReparseRequest(Exception):
--    """Exception raised when a command changed the command-line."""
--
--def up(*args, **cmdoptions):
--    l = len(utl.topic_stack)
--    if l > 1:
--	utl.topic_stack.pop()
--	utl.set_topic(utl.topic_stack[-1])
--    else:
--	utl.log_debug("Already at the top of the stack")
--
--def toggle_flag(*args, **cmdoptions):
--    flag = cmdoptions["flag"]
--    if utl.global_opts[flag]:
--	utl.global_opts[flag] = 0
--    else:
--	utl.global_opts[flag] = 1
--
--    return utl.global_opts[flag]
--
--def cd_(*args, **cmdoptions):
--    utl.log_dev("args: %s\nopts: %s" % (repr(args), repr(cmdoptions)))
--    if not cmdoptions["topic"]:
--	utl.log_err("No topic specified")
--	return 1
--
--    if cmdoptions["topic"]:
--	utl.set_topic(cmdoptions["topic"])
--    if args:
--	raise ReparseRequest()
--    if utl.crm_topic not in utl.topic_stack:
--	utl.topic_stack.append(cmdoptions["topic"])
--    if not utl.global_opts["interactive"]:
--	help(cmdoptions["topic"])
--    return 0
--
--def exit(*args, **cmdoptions):
--    sys.exit(0)
--
--def help(*args, **cmdoptions):
--    if args:
--	raise HelpRequest(args[0])
--    raise HelpRequest(utl.crm_topic)
--
--def debugstate(*args, **cmdoptions):
--    utl.log_info("Global Options: ")
--    for opt in utl.global_opts.keys():
--	utl.log_info(" * %s:\t%s" % (opt, utl.global_opts[opt]))
--    utl.log_info("Stack: "+repr(utl.topic_stack))
--    utl.log_info("Stack Head: "+utl.crm_topic)
--    return 0
--
--def do_list(*args, **cmdoptions):
--    topic = utl.crm_topic
--    if cmdoptions.has_key("topic") and cmdoptions["topic"]:
--	topic = cmdoptions["topic"]
--
--    utl.log_debug("Complete '%s' listing" % topic)
--    if topic == "resources":
--	utl.os_system("crm_resource -l", True)
--    elif topic == "nodes":
--	lines = utl.os_system("cibadmin -Q -o nodes", False)
--	for line in lines:
--	    if line.find("node ") >= 0:
--		print line.rstrip()
--    else:
--	utl.log_err("%s: Topic %s is not (yet) supported" % ("list", topic))
--	return 1
--    return 0
--
--def do_status(*args, **cmdoptions):
--    topic = utl.crm_topic
--    if cmdoptions.has_key("topic") and cmdoptions["topic"]:
--	topic = cmdoptions["topic"]
--
--    if topic == "resources":
--	if not args:
--	    utl.os_system("crm_resource -L", True)
--	for rsc in args:
--	    utl.os_system("crm_resource -W -r %s"%rsc, True)
--
--    elif topic == "nodes":
--	lines = utl.os_system("cibadmin -Q -o status", False)
--	for line in lines:
--	    line = line.rstrip()
--	    utl.log_dev("status line: "+line)
--	    if line.find("node_state ") >= 0:
--		if not args:
--		    print line
--		for node in args:
--		    if line.find(node) >= 0:
--			print line
--    else:
--	utl.log_err("Topic %s is not (yet) supported" % topic)
--	return 1
--
--    return 0
-diff --git a/tools/crm_mon.c b/tools/crm_mon.c
-index 0b71275..46a59d6 100644
---- a/tools/crm_mon.c
-+++ b/tools/crm_mon.c
-@@ -2715,6 +2715,7 @@ print_status(pe_working_set_t * data_set)
-                 } else {
-                     online_nodes = add_list_element(online_nodes, node_name);
-                 }
-+                free(node_name);
-                 continue;
-             }
-         } else {
-@@ -2727,6 +2728,7 @@ print_status(pe_working_set_t * data_set)
-                 } else {
-                     offline_nodes = add_list_element(offline_nodes, node_name);
-                 }
-+                free(node_name);
-                 continue;
-             }
-         }
-@@ -3078,6 +3080,7 @@ print_html_status(pe_working_set_t * data_set, const char \
                *filename)
-             fprintf(stream, "</ul>\n");
-         }
-         fprintf(stream, "</li>\n");
-+        free(node_name);
-     }
-     fprintf(stream, "</ul>\n");
-
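
The crm_mon.c hunks above fix a per-iteration leak: node_name is allocated
fresh for each node, and the early continue branches jumped back to the top
of the loop without freeing it. The rule the patch enforces, reduced to a
sketch (get_node_name() is a hypothetical stand-in):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical per-node allocation, standing in for crm_mon's
     * node_name handling. */
    static char *get_node_name(int id)
    {
        char *name = malloc(32);

        if (name != NULL) {
            snprintf(name, 32, "node%d", id);
        }
        return name;
    }

    static void walk_nodes(int count)
    {
        for (int i = 0; i < count; i++) {
            char *node_name = get_node_name(i);

            if (node_name == NULL) {
                continue;
            }
            /* ... classify and print the node ... */
            free(node_name);   /* freed on every path out of the body */
        }
    }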
-diff --git a/tools/crm_node.c b/tools/crm_node.c
-index c484e17..d0195e3 100644
---- a/tools/crm_node.c
-+++ b/tools/crm_node.c
-@@ -470,6 +470,7 @@ try_cman(int command, enum cluster_type_e stack)
-
-         case 'l':
-         case 'p':
-+            memset(cman_nodes, 0, MAX_NODES * sizeof(cman_node_t));
-             rc = cman_get_nodes(cman_handle, MAX_NODES, &node_count, cman_nodes);
-             if (rc != 0) {
-                 fprintf(stderr, "Couldn't query cman node list: %d %d", rc, errno);
-@@ -489,6 +490,7 @@ try_cman(int command, enum cluster_type_e stack)
-             break;
-
-         case 'i':
-+            memset(&node, 0, sizeof(cman_node_t));
-             rc = cman_get_node(cman_handle, CMAN_NODEID_US, &node);
-             if (rc != 0) {
-                 fprintf(stderr, "Couldn't query cman node id: %d %d", rc, errno);
-diff --git a/tools/crm_primitive.py.in b/tools/crm_primitive.py.in
-deleted file mode 100644
-index cfe0b5c..0000000
---- a/tools/crm_primitive.py.in
-+++ /dev/null
-@@ -1,268 +0,0 @@
--#!@PYTHON@
--
--'''Create an XML fragment describing a new resource
--'''
--
--__copyright__='''
--Author: Andrew Beekhof <andrew@beekhof.net>
--Copyright (C) 2005 Andrew Beekhof
--'''
--
--#
--# This program is free software; you can redistribute it and/or
--# modify it under the terms of the GNU General Public License
--# as published by the Free Software Foundation; either version 2
--# of the License, or (at your option) any later version.
--#
--# This program is distributed in the hope that it will be useful,
--# but WITHOUT ANY WARRANTY; without even the implied warranty of
--# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
--# GNU General Public License for more details.
--#
--# You should have received a copy of the GNU General Public License
--# along with this program; if not, write to the Free Software
--# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
--
--import sys,string,os
--import xml.dom.minidom
--
--print_rsc_only = 0
--rsc_name = None
--rsc_class = None
--rsc_type = None
--rsc_provider = None
--start_timeout = None
--stop_timeout = None
--monitor_interval = None
--monitor_timeout = None
--rsc_options = []
--rsc_location = []
--rsc_colocation = []
--
--def create_cib() :
--	doc = xml.dom.minidom.Document()
--	cib = doc.createElement("cib")
--	doc.appendChild(cib)
--
--	configuration = doc.createElement("configuration")
--	cib.appendChild(configuration)
--
--	#crm_config = doc.createElement("crm_config")
--	#configuration.appendChild(crm_config)
--
--	resources = doc.createElement("resources")
--	configuration.appendChild(resources)
--	constraints = doc.createElement("constraints")
--	configuration.appendChild(constraints)
--
--	return doc, resources, constraints
--
--def cib_resource(doc, id, ra_class, type, provider):
--
--	params = None
--
--	resource = doc.createElement("primitive")
--
--	resource.setAttribute("id",    id)
--	resource.setAttribute("type",  type)
--	resource.setAttribute("class", ra_class)
--
--	if ra_class == "ocf":
--		if not provider:
--			provider = "heartbeat"
--		resource.setAttribute("provider", provider)
--
--	elif ra_class != "lsb" and ra_class != "heartbeat":
--		print "Unknown resource class: "+ ra_class
--		return None
--
--	operations = doc.createElement("operations")
--	resource.appendChild(operations)
--
--	if monitor_interval != None:
--	    op = doc.createElement("op")
--	    operations.appendChild(op)
--	    op.setAttribute("id",       id + "_mon_" + monitor_interval)
--	    op.setAttribute("name",     "monitor")
--	    op.setAttribute("interval", monitor_interval)
--	    if monitor_timeout != None:
--		op.setAttribute("timeout", monitor_timeout)
--
--	if start_timeout != None:
--	    op = doc.createElement("op")
--	    operations.appendChild(op)
--	    op.setAttribute("id",      id + "_start")
--	    op.setAttribute("name",    "start")
--	    op.setAttribute("timeout", start_timeout)
--
--	if stop_timeout != None:
--	    op = doc.createElement("op")
--	    operations.appendChild(op)
--	    op.setAttribute("id",      id + "_stop")
--	    op.setAttribute("name",    "stop")
--	    op.setAttribute("timeout", stop_timeout)
--
--	instance_attributes = doc.createElement("instance_attributes")
--	instance_attributes.setAttribute("id", id)
--	resource.appendChild(instance_attributes)
--	attributes = doc.createElement("attributes")
--	instance_attributes.appendChild(attributes)
--	for i in range(0,len(rsc_options)) :
--		if rsc_options[i] == None :
--			continue
--
--		param = string.split(rsc_options[i], "=")
--		nvpair = doc.createElement("nvpair")
--		nvpair.setAttribute("id",    id + "_" + param[0])
--		nvpair.setAttribute("name",  param[0])
--		nvpair.setAttribute("value", param[1])
--		attributes.appendChild(nvpair)
--
--	return resource
--
--def cib_rsc_location(doc, id, node, score):
--	rule = doc.createElement("rule")
--	rule.setAttribute("id", id+"_prefer_"+node+"_rule")
--	rule.setAttribute("score", score)
--	expression = doc.createElement("expression")
--	expression.setAttribute("id",id+"_prefer_"+node+"_expr")
--	expression.setAttribute("attribute","#uname")
--	expression.setAttribute("operation","eq")
--	expression.setAttribute("value", node)
--	rule.appendChild(expression)
--	return rule
--
--def cib_rsc_colocation(doc, id, other_resource, score):
--	rsc_colocation = doc.createElement("rsc_colocation")
--	rsc_colocation.setAttribute("id",  id+"_colocate_with_"+other_resource)
--	rsc_colocation.setAttribute("from", id)
--	rsc_colocation.setAttribute("to", other_resource)
--	rsc_colocation.setAttribute("score", score)
--	return rsc_colocation
--
--def print_usage():
--	print "usage: " \
--	    + sys.argv[0] \
--	    + " --name <string>"\
--	    + " --class <string>"\
--	    + " --type <string>"\
--	    + " [--provider <string>]"\
--	    + "\n\t"\
--	    + " [--start-timeout <interval>]"\
--	    + " [--stop-timeout <interval>]"\
--	    + " [--monitor <interval>]"\
--	    + " [--monitor-timeout <interval>]"\
--	    + "\n\t"\
--	    + " [--rsc-option name=value]*"\
--	    + " [--rsc-location uname=score]*"\
--	    + " [--rsc-colocation resource=score]*"
--	print "Example:\n\t" + sys.argv[0] \
--	    + " --name cluster_ip_1 --type IPaddr --provider heartbeat --class ocf "\
--	    + "--rsc-option ip=192.168.1.101 --rsc-location node1=500 | cibadmin -C -p"
--	sys.exit(1)
--
--if __name__=="__main__" :
--
--	# Process arguments...
--	skipthis = None
--	args = sys.argv[1:]
--	if len(args) == 0:
--		print_usage()
--
--	for i in range(0, len(args)) :
--		if skipthis :
--			skipthis = None
--			continue
--
--		elif args[i] == "--name" :
--			skipthis = True
--			rsc_name = args[i+1]
--
--		elif args[i] == "--class" :
--			skipthis = True
--			rsc_class = args[i+1]
--
--		elif args[i] == "--type" :
--			skipthis = True
--			rsc_type = args[i+1]
--
--		elif args[i] == "--provider" :
--			skipthis = True
--			rsc_provider = args[i+1]
--
--		elif args[i] == "--start-timeout" :
--			skipthis = True
--			start_timeout = args[i+1]
--
--		elif args[i] == "--stop-timeout" :
--			skipthis = True
--			stop_timeout = args[i+1]
--
--		elif args[i] == "--monitor" :
--			skipthis = True
--			monitor_interval = args[i+1]
--
--		elif args[i] == "--monitor-timeout" :
--			skipthis = True
--			monitor_timeout = args[i+1]
--
--		elif args[i] == "--rsc-option" :
--			skipthis = True
--			params = string.split(args[i+1], "=")
--			if params[1] != None:
--				rsc_options.append(args[i+1])
--			else:
--				print "option '"+args[i+1]+"'  must be of the form name=value"
--
--		elif args[i] == "--rsc-location" :
--			skipthis = True
--			params = string.split(args[i+1], "=")
--			if params[1] != None:
--			    rsc_location.append(args[i+1])
--			else:
--			    print "option '"+args[i+1]+"'  must be of the form host=score"
--
--		elif args[i] == "--rsc-colocation" :
--			skipthis = True
--			params = string.split(args[i+1], "=")
--			if params[1] != None:
--				rsc_colocation.append(args[i+1])
--			else:
--				print "option '"+args[i+1]+"' must be of the form resource=score"
--
--		elif args[i] == "--rsc-only" :
--			print_rsc_only = 1
--		else:
--			print "Unknown argument: "+ args[i]
--			print_usage()
--
--	cib = create_cib()
--	pre_line = ""
--	id_index = 1
--	resource = cib_resource(cib[0], rsc_name, rsc_class, rsc_type, rsc_provider)
--
--	if print_rsc_only:
--		print resource.toprettyxml()
--		sys.exit(0)
--
--	cib[1].appendChild(resource)
--
--	if rsc_location != None :
--		rsc_loc = cib[0].createElement("rsc_location")
--		rsc_loc.setAttribute("id",  rsc_name+"_preferences")
--		rsc_loc.setAttribute("rsc", rsc_name)
--		for i in range(0, len(rsc_location)) :
--			param = string.split(rsc_location[i], "=")
--			location_rule = cib_rsc_location(cib[0], rsc_name, param[0], param[1])
--			rsc_loc.appendChild(location_rule)
--		cib[2].appendChild(rsc_loc)
--
--	for i in range(0, len(rsc_colocation)) :
--		if rsc_location[i] == None :
--			continue
--
--		param = string.split(rsc_colocation[i], "=")
--		colocation_rule = cib_rsc_colocation(cib[0], rsc_name, param[0], param[1])
--		cib[2].appendChild(colocation_rule)
--
--	print cib[0].toprettyxml()
-diff --git a/tools/crm_resource.c b/tools/crm_resource.c
-index 31136ef..2fce3b7 100644
---- a/tools/crm_resource.c
-+++ b/tools/crm_resource.c
-@@ -853,6 +853,7 @@ main(int argc, char **argv)
-             rc = -ENXIO;
-             goto bail;
-         }
-+
-         rc = cli_resource_print_attribute(rsc_id, prop_name, &data_set);
-
-     } else if (rsc_cmd == 'p') {
-@@ -883,6 +884,10 @@ main(int argc, char **argv)
-     } else if (rsc_cmd == 'C' && rsc_id) {
-         resource_t *rsc = pe_find_resource(data_set.resources, rsc_id);
-
-+        if(do_force == FALSE) {
-+            rsc = uber_parent(rsc);
-+        }
-+
-         crm_debug("Re-checking the state of %s on %s", rsc_id, host_uname);
-         if(rsc) {
-             crmd_replies_needed = 0;
-@@ -891,6 +896,11 @@ main(int argc, char **argv)
-             rc = -ENODEV;
-         }
-
-+        if(rc == pcmk_ok && BE_QUIET == FALSE) {
-+            /* Now check XML_RSC_ATTR_TARGET_ROLE and XML_RSC_ATTR_MANAGED */
-+            cli_resource_check(cib_conn, rsc);
-+        }
-+
-         if (rc == pcmk_ok) {
-             start_mainloop();
-         }
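
Note: the crm_resource change above routes cleanup through uber_parent()
unless --force is given, so cleaning up one clone instance applies to the
whole clone. A minimal sketch of that parent-escalation idiom (illustrative
names only, not the pacemaker implementation):

    /* Illustrative only: climb a resource's parent chain to its
     * topmost ancestor, the way uber_parent() is used above. */
    typedef struct sketch_rsc_s {
        const char *id;
        struct sketch_rsc_s *parent;
    } sketch_rsc_t;

    static sketch_rsc_t *
    sketch_uber_parent(sketch_rsc_t *rsc)
    {
        while (rsc != NULL && rsc->parent != NULL) {
            rsc = rsc->parent;  /* e.g. primitive -> group -> clone */
        }
        return rsc;
    }
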
-diff --git a/tools/crm_resource.h b/tools/crm_resource.h
-index 49b6138..5a206e0 100644
---- a/tools/crm_resource.h
-+++ b/tools/crm_resource.h
-@@ -68,6 +68,7 @@ int cli_resource_print_property(const char *rsc, const char *attr, \
                pe_working_se
- int cli_resource_print_operations(const char *rsc_id, const char *host_uname, bool \
                active, pe_working_set_t * data_set);
-
- /* runtime */
-+void cli_resource_check(cib_t * cib, resource_t *rsc);
- int cli_resource_fail(crm_ipc_t * crmd_channel, const char *host_uname, const char \
                *rsc_id, pe_working_set_t * data_set);
- int cli_resource_search(const char *rsc, pe_working_set_t * data_set);
- int cli_resource_delete(cib_t *cib_conn, crm_ipc_t * crmd_channel, const char \
                *host_uname, resource_t * rsc, pe_working_set_t * data_set);
-diff --git a/tools/crm_resource_print.c b/tools/crm_resource_print.c
-index 9c3711c..946b9e3 100644
---- a/tools/crm_resource_print.c
-+++ b/tools/crm_resource_print.c
-@@ -352,8 +352,11 @@ cli_resource_print_attribute(const char *rsc, const char *attr, \
                pe_working_set_t
-
-     if (safe_str_eq(attr_set_type, XML_TAG_ATTR_SETS)) {
-         get_rsc_attributes(params, the_rsc, current, data_set);
-+
-     } else if (safe_str_eq(attr_set_type, XML_TAG_META_SETS)) {
-+        /* No need to redirect to the parent */
-         get_meta_attributes(params, the_rsc, current, data_set);
-+
-     } else {
-         unpack_instance_attributes(data_set->input, the_rsc->xml, \
                XML_TAG_UTILIZATION, NULL,
-                                    params, NULL, FALSE, data_set->now);
-diff --git a/tools/crm_resource_runtime.c b/tools/crm_resource_runtime.c
-index 006ec08..a270cbf 100644
---- a/tools/crm_resource_runtime.c
-+++ b/tools/crm_resource_runtime.c
-@@ -198,6 +198,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-     int rc = pcmk_ok;
-     static bool need_init = TRUE;
-
-+    char *lookup_id = NULL;
-     char *local_attr_id = NULL;
-     char *local_attr_set = NULL;
-
-@@ -212,14 +213,39 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-     }
-
-     if (safe_str_eq(attr_set_type, XML_TAG_ATTR_SETS)) {
--        rc = find_resource_attr(cib, XML_ATTR_ID, rsc_id, XML_TAG_META_SETS, \
                attr_set, attr_id,
-+        rc = find_resource_attr(cib, XML_ATTR_ID, uber_parent(rsc)->id, \
                XML_TAG_META_SETS, attr_set, attr_id,
-                                 attr_name, &local_attr_id);
--        if (rc == pcmk_ok) {
--            printf("WARNING: There is already a meta attribute called %s \
                (id=%s)\n", attr_name,
--                   local_attr_id);
-+        if(rc == pcmk_ok && do_force == FALSE) {
-+            if (BE_QUIET == FALSE) {
-+                printf("WARNING: There is already a meta attribute for '%s' called \
                '%s' (id=%s)\n",
-+                       uber_parent(rsc)->id, attr_name, local_attr_id);
-+                printf("         Delete '%s' first or use --force to override\n", \
                local_attr_id);
-+            }
-+            return -ENOTUNIQ;
-+        }
-+
-+    } else if(rsc->parent) {
-+
-+        switch(rsc->parent->variant) {
-+            case pe_group:
-+                if (BE_QUIET == FALSE) {
-+                    printf("Updating '%s' for '%s' will not apply to its peers in \
                '%s'\n", attr_name, rsc_id, rsc->parent->id);
-+                }
-+                break;
-+            case pe_master:
-+            case pe_clone:
-+                rsc = rsc->parent;
-+                if (BE_QUIET == FALSE) {
-+                    printf("Updating '%s' for '%s'...\n", rsc->id, rsc_id);
-+                }
-+                break;
-+            default:
-+                break;
-         }
-     }
--    rc = find_resource_attr(cib, XML_ATTR_ID, rsc_id, attr_set_type, attr_set, \
                attr_id, attr_name,
-+
-+    lookup_id = clone_strip(rsc->id); /* Could be a cloned group! */
-+    rc = find_resource_attr(cib, XML_ATTR_ID, lookup_id, attr_set_type, attr_set, \
                attr_id, attr_name,
-                             &local_attr_id);
-
-     if (rc == pcmk_ok) {
-@@ -227,6 +253,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         attr_id = local_attr_id;
-
-     } else if (rc != -ENXIO) {
-+        free(lookup_id);
-         free(local_attr_id);
-         return rc;
-
-@@ -250,7 +277,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         free_xml(cib_top);
-
-         if (attr_set == NULL) {
--            local_attr_set = crm_concat(rsc_id, attr_set_type, '-');
-+            local_attr_set = crm_concat(lookup_id, attr_set_type, '-');
-             attr_set = local_attr_set;
-         }
-         if (attr_id == NULL) {
-@@ -263,7 +290,7 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         }
-
-         xml_top = create_xml_node(NULL, tag);
--        crm_xml_add(xml_top, XML_ATTR_ID, rsc_id);
-+        crm_xml_add(xml_top, XML_ATTR_ID, lookup_id);
-
-         xml_obj = create_xml_node(xml_top, attr_set_type);
-         crm_xml_add(xml_obj, XML_ATTR_ID, attr_set);
-@@ -285,7 +312,15 @@ cli_resource_update_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-     crm_log_xml_debug(xml_top, "Update");
-
-     rc = cib->cmds->modify(cib, XML_CIB_TAG_RESOURCES, xml_top, cib_options);
-+    if (rc == pcmk_ok && BE_QUIET == FALSE) {
-+        printf("Set '%s' option: id=%s%s%s%s%s=%s\n", lookup_id, local_attr_id,
-+               attr_set ? " set=" : "", attr_set ? attr_set : "",
-+               attr_name ? " name=" : "", attr_name ? attr_name : "", attr_value);
-+    }
-+
-     free_xml(xml_top);
-+
-+    free(lookup_id);
-     free(local_attr_id);
-     free(local_attr_set);
-
-@@ -330,6 +365,7 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-     xmlNode *xml_obj = NULL;
-
-     int rc = pcmk_ok;
-+    char *lookup_id = NULL;
-     char *local_attr_id = NULL;
-     resource_t *rsc = find_rsc_or_clone(rsc_id, data_set);
-
-@@ -337,7 +373,29 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-         return -ENXIO;
-     }
-
--    rc = find_resource_attr(cib, XML_ATTR_ID, rsc_id, attr_set_type, attr_set, \
                attr_id, attr_name,
-+    if(rsc->parent && safe_str_eq(attr_set_type, XML_TAG_META_SETS)) {
-+
-+        switch(rsc->parent->variant) {
-+            case pe_group:
-+                if (BE_QUIET == FALSE) {
-+                    printf("Removing '%s' for '%s' will not apply to its peers in \
                '%s'\n", attr_name, rsc_id, rsc->parent->id);
-+                }
-+                break;
-+            case pe_master:
-+            case pe_clone:
-+                rsc = rsc->parent;
-+                if (BE_QUIET == FALSE) {
-+                    printf("Removing '%s' from '%s' for '%s'...\n", attr_name, \
                rsc->id, rsc_id);
-+                }
-+                break;
-+            default:
-+                break;
-+        }
-+
-+    }
-+
-+    lookup_id = clone_strip(rsc->id);
-+    rc = find_resource_attr(cib, XML_ATTR_ID, lookup_id, attr_set_type, attr_set, \
                attr_id, attr_name,
-                             &local_attr_id);
-
-     if (rc == -ENXIO) {
-@@ -360,8 +418,8 @@ cli_resource_delete_attribute(const char *rsc_id, const char \
                *attr_set, const ch
-     CRM_ASSERT(cib);
-     rc = cib->cmds->delete(cib, XML_CIB_TAG_RESOURCES, xml_obj, cib_options);
-
--    if (rc == pcmk_ok) {
--        printf("Deleted %s option: id=%s%s%s%s%s\n", rsc_id, local_attr_id,
-+    if (rc == pcmk_ok && BE_QUIET == FALSE) {
-+        printf("Deleted '%s' option: id=%s%s%s%s%s\n", lookup_id, local_attr_id,
-                attr_set ? " set=" : "", attr_set ? attr_set : "",
-                attr_name ? " name=" : "", attr_name ? attr_name : "");
-     }
-@@ -493,7 +551,10 @@ cli_resource_delete(cib_t *cib_conn, crm_ipc_t * crmd_channel, \
                const char *host_
-         for (lpc = rsc->children; lpc != NULL; lpc = lpc->next) {
-             resource_t *child = (resource_t *) lpc->data;
-
--            cli_resource_delete(cib_conn, crmd_channel, host_uname, child, \
                data_set);
-+            rc = cli_resource_delete(cib_conn, crmd_channel, host_uname, child, \
                data_set);
-+            if(rc != pcmk_ok || is_not_set(rsc->flags, pe_rsc_unique)) {
-+                return rc;
-+            }
-         }
-         return pcmk_ok;
-
-@@ -514,31 +575,78 @@ cli_resource_delete(cib_t *cib_conn, crm_ipc_t * crmd_channel, \
                const char *host_
-     node = pe_find_node(data_set->nodes, host_uname);
-
-     if (node && node->details->rsc_discovery_enabled) {
--        printf("Cleaning up %s on %s\n", rsc->id, host_uname);
-+        printf("Cleaning up %s on %s", rsc->id, host_uname);
-         rc = send_lrm_rsc_op(crmd_channel, CRM_OP_LRM_DELETE, host_uname, rsc->id, \
                TRUE, data_set);
-     } else {
-         printf("Resource discovery disabled on %s. Unable to delete lrm state.\n", \
                host_uname);
-+        rc = -EOPNOTSUPP;
-     }
-
-     if (rc == pcmk_ok) {
-         char *attr_name = NULL;
--        const char *id = rsc->id;
-
-         if(node && node->details->remote_rsc == NULL && \
                node->details->rsc_discovery_enabled) {
-             crmd_replies_needed++;
-         }
--        if (rsc->clone_name) {
--            id = rsc->clone_name;
-+
-+        if(is_not_set(rsc->flags, pe_rsc_unique)) {
-+            char *id = clone_strip(rsc->id);
-+            attr_name = crm_strdup_printf("fail-count-%s", id);
-+            free(id);
-+
-+        } else if (rsc->clone_name) {
-+            attr_name = crm_strdup_printf("fail-count-%s", rsc->clone_name);
-+
-+        } else {
-+            attr_name = crm_strdup_printf("fail-count-%s", rsc->id);
-         }
-
--        attr_name = crm_concat("fail-count", id, '-');
-+        printf(", removing %s\n", attr_name);
-         rc = attrd_update_delegate(NULL, 'D', host_uname, attr_name, NULL, \
                XML_CIB_TAG_STATUS, NULL,
-                               NULL, NULL, node ? is_remote_node(node) : FALSE);
-         free(attr_name);
-+
-+    } else if(rc != -EOPNOTSUPP) {
-+        printf(" - FAILED\n");
-     }
-+
-     return rc;
- }
-
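
Note: the fail-count attribute names built above depend on stripping the
":<n>" instance suffix from anonymous clone IDs, since all instances share
one fail-count. A rough sketch of what a clone_strip()-style helper does
(sketch_clone_strip is an illustrative name; the real pacemaker helper is
more careful about what it strips):

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative: copy "name" out of "name:0"; caller frees. */
    static char *
    sketch_clone_strip(const char *rsc_id)
    {
        const char *colon = strrchr(rsc_id, ':');
        size_t len = colon ? (size_t)(colon - rsc_id) : strlen(rsc_id);
        char *copy = malloc(len + 1);

        if (copy != NULL) {
            memcpy(copy, rsc_id, len);
            copy[len] = '\0';
        }
        return copy;
    }
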
-+void
-+cli_resource_check(cib_t * cib_conn, resource_t *rsc)
-+{
-+
-+    char *role_s = NULL;
-+    char *managed = NULL;
-+    resource_t *parent = uber_parent(rsc);
-+
-+    find_resource_attr(cib_conn, XML_ATTR_ID, parent->id,
-+                       XML_TAG_META_SETS, NULL, NULL, XML_RSC_ATTR_MANAGED, \
                &managed);
-+
-+    find_resource_attr(cib_conn, XML_ATTR_ID, parent->id,
-+                       XML_TAG_META_SETS, NULL, NULL, XML_RSC_ATTR_TARGET_ROLE, \
                &role_s);
-+
-+    if(managed == NULL) {
-+        managed = strdup("1");
-+    }
-+    if(crm_is_true(managed) == FALSE) {
-+        printf("\n\t*Resource %s is configured to not be managed by the cluster\n", \
                parent->id);
-+    }
-+    if(role_s) {
-+        enum rsc_role_e role = text2role(role_s);
-+        if(role == RSC_ROLE_UNKNOWN) {
-+            // Treated as if unset
-+
-+        } else if(role == RSC_ROLE_STOPPED) {
-+            printf("\n\t* The configuration specifies that '%s' should remain \
                stopped\n", parent->id);
-+
-+        } else if(parent->variant > pe_clone && role != RSC_ROLE_MASTER) {
-+            printf("\n\t* The configuration specifies that '%s' should not be \
                promoted\n", parent->id);
-+        }
-+    }
-+}
-+
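
Note: cli_resource_check() above deliberately treats an unset is-managed
meta attribute as managed (the strdup("1") fallback). A toy stand-in for
that default-to-true parse, assuming the usual truthy spellings
(sketch_is_managed is an illustrative name; crm_is_true() is the real
helper):

    #include <stdbool.h>
    #include <string.h>
    #include <strings.h>

    /* Sketch of the "unset means managed" default used above. */
    static bool
    sketch_is_managed(const char *value)
    {
        if (value == NULL) {
            return true;  /* no meta attribute set: assume managed */
        }
        return strcmp(value, "1") == 0
               || strcasecmp(value, "true") == 0
               || strcasecmp(value, "yes") == 0
               || strcasecmp(value, "on") == 0;
    }
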
- int
- cli_resource_fail(crm_ipc_t * crmd_channel, const char *host_uname,
-              const char *rsc_id, pe_working_set_t * data_set)
-diff --git a/tools/crm_simulate.c b/tools/crm_simulate.c
-index 0051112..7d0a8eb 100644
---- a/tools/crm_simulate.c
-+++ b/tools/crm_simulate.c
-@@ -59,8 +59,11 @@ char *use_date = NULL;
- static void
- get_date(pe_working_set_t * data_set)
- {
-+    int value = 0;
-     time_t original_date = 0;
--    crm_element_value_int(data_set->input, "execution-date", (int*)&original_date);
-+
-+    crm_element_value_int(data_set->input, "execution-date", &value);
-+    original_date = value;
-
-     if (use_date) {
-         data_set->now = crm_time_new(use_date);
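
Note: the crm_simulate change above fixes a pointer-aliasing bug.
crm_element_value_int() writes through an int pointer, so casting
&original_date (a time_t, commonly 64-bit) to (int *) updates only part of
the variable. The safe pattern is to parse into a real int and then widen;
a minimal sketch, with a hypothetical parse_int() standing in for
crm_element_value_int():

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for crm_element_value_int(). */
    static void
    parse_int(const char *s, int *dest)
    {
        if (s != NULL) {
            sscanf(s, "%d", dest);
        }
    }

    static time_t
    read_execution_date(const char *attr)
    {
        int value = 0;          /* read into a genuine int ... */
        time_t stamp;

        parse_int(attr, &value);
        stamp = value;          /* ... then widen to time_t */
        return stamp;
    }
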
-diff --git a/tools/crm_utils.py.in b/tools/crm_utils.py.in
-deleted file mode 100644
-index 67d6918..0000000
---- a/tools/crm_utils.py.in
-+++ /dev/null
-@@ -1,188 +0,0 @@
--#!/bin/env python
--#
--#
--#	pingd OCF Resource Agent
--#	Records (in the CIB) the current number of ping nodes a
--#	   cluster node can connect to.
--#
--# Copyright (c) 2006 Andrew Beekhof
--#                    All Rights Reserved.
--#
--# This program is free software; you can redistribute it and/or modify
--# it under the terms of version 2 of the GNU General Public License as
--# published by the Free Software Foundation.
--#
--# This program is distributed in the hope that it would be useful, but
--# WITHOUT ANY WARRANTY; without even the implied warranty of
--# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
--#
--# Further, this software is distributed without any warranty that it is
--# free of the rightful claim of any third person regarding infringement
--# or the like.  Any license provided herein, whether implied or
--# otherwise, applies only to this software file.  Patent licenses, if
--# any, provided herein do not apply to combinations of this program with
--# other software, or any other product whatsoever.
--#
--# You should have received a copy of the GNU General Public License
--# along with this program; if not, write the Free Software Foundation,
--# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
--#
--#######################################################################
--
--import os
--import sys
--import getopt
--import readline
--import traceback
--from popen2 import Popen3
--
--crm_topic = "crm"
--topic_stack = [ crm_topic ]
--hist_file  = os.environ.get('HOME')+"/.crm_history"
--global_opts = {}
--
--def exit_(code=0):
--    if global_opts["interactive"]:
--	log_info("Exiting... ")
--    try:
--	readline.write_history_file(hist_file)
--	log_debug("Wrote history to: "+hist_file)
--    except:
--	log_debug("Couldn't write history to: "+hist_file)
--    sys.exit(code)
--
--def log_debug(log):
--    if global_opts.has_key("debug") and global_opts["debug"]:
--	print log
--
--def log_dev(log):
--    if global_opts.has_key("devlog") and global_opts["devlog"]:
--	print log
--
--def log_info(log):
--    print log
--
--def log_err(log):
--    print "ERROR: "+log
--
--def set_topic(name):
--    global crm_topic
--    if crm_topic != name:
--    	log_dev("topic: %s->%s" % (crm_topic, name))
--    crm_topic = name
--
--def os_system(cmd, print_raw=False):
--    log_debug("Performing command: "+cmd)
--    p = Popen3(cmd, None)
--    p.tochild.close()
--    result = p.fromchild.readlines()
--    p.fromchild.close()
--    p.wait()
--    if print_raw:
--	for line in result:
--	    print line.rstrip()
--    return result
--
--#
--#  Creates an argv-style array (that preserves quoting) for use in shell-mode
--#
--def create_argv(text):
--    args = []
--    word = []
--    index = 0
--    total = len(text)
--
--    in_word = False
--    in_verbatum = False
--
--    while index < total:
--	finish_word = False
--	append_word = False
--	#log_debug("processing: "+text[index])
--	if text[index] == '\\':
--	    index = index +1
--	    append_word = True
--
--	elif text[index].isspace():
--	    if in_verbatum or in_word:
--		append_word = True
--	    else:
--		finish_word = True
--
--	elif text[index] == '"':
--	    if in_verbatum:
--		append_word = True
--	    else:
--		finish_word = True
--		if in_word:
--		    in_word = False
--		else:
--		    in_word = True
--
--	elif text[index] == '\'':
--	    finish_word = True
--	    if in_verbatum:
--		in_verbatum = False
--	    else:
--		in_verbatum = True
--	else:
--	    append_word = True
--
--	if finish_word:
--	    if word:
--		args.append(''.join(word))
--		word = []
--	elif append_word:
--	    word.append(text[index])
--	    #log_debug("Added %s to word: %s" % (text[index], str(word)))
--
--	index = index +1
--
--    if in_verbatum or in_word:
--	text=""
--	if word:
--	    text=" after: '%s'"%''.join(word)
--	    raise QuotingError("Un-matched quoting%s"%text, args)
--
--    elif word:
--	args.append(''.join(word))
--
--    return args
--
--def init_readline(func):
--    readline.set_completer(func)
--    readline.parse_and_bind("tab: complete")
--    readline.set_history_length(100)
--
--    try:
--	readline.read_history_file(hist_file)
--    except:
--	pass
--
--def fancyopts(args, options, state):
--    long = []
--    short = ''
--    map = {}
--    dt = {}
--
--    for s, l, d, c in options:
--        pl = l.replace('-', '_')
--        map['-'+s] = map['--'+l] = pl
--        state[pl] = d
--        dt[pl] = type(d)
--        if not d is None and not callable(d):
--            if s: s += ':'
--            if l: l += '='
--        if s: short = short + s
--        if l: long.append(l)
--
--    opts, args = getopt.getopt(args, short, long)
--
--    for opt, arg in opts:
--        if dt[map[opt]] is type(fancyopts): state[map[opt]](state,map[opt],arg)
--        elif dt[map[opt]] is type(1): state[map[opt]] = int(arg)
--        elif dt[map[opt]] is type(''): state[map[opt]] = arg
--        elif dt[map[opt]] is type([]): state[map[opt]].append(arg)
--        elif dt[map[opt]] is type(None): state[map[opt]] = 1
--
--    return args
-diff --git a/tools/regression.acls.exp b/tools/regression.acls.exp
-index ae6735a..ac7ae0c 100644
---- a/tools/regression.acls.exp
-+++ b/tools/regression.acls.exp
-@@ -253,10 +253,10 @@ Error performing operation: Permission denied
- =#=#=#= End test: unknownguy: Set stonith-enabled - Permission denied (13) =#=#=#- \
                * Passed: crm_attribute  - unknownguy: Set stonith-enabled
- =#=#=#= Begin test: unknownguy: Create a resource =#=#=#--__xml_acl_check: \
                Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
--__xml_acl_check: 	Ordinary user unknownguy cannot access the CIB without any \
                defined ACLs
--__xml_acl_check: 	Ordinary user unknownguy cannot access the CIB without any \
                defined ACLs
--__xml_acl_check: 	Ordinary user unknownguy cannot access the CIB without any \
                defined ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
- Call failed: Permission denied
- =#=#=#= End test: unknownguy: Create a resource - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - unknownguy: Create a resource
-@@ -273,8 +273,8 @@ Error performing operation: Permission denied
- =#=#=#= End test: l33t-haxor: Set stonith-enabled - Permission denied (13) =#=#=#- \
                * Passed: crm_attribute  - l33t-haxor: Set stonith-enabled
- =#=#=#= Begin test: l33t-haxor: Create a resource =#=#=#--__xml_acl_check: 	400 \
                access denied to /cib/configuration/resources/primitive[@id='dummy']: \
                parent
--__xml_acl_post_process: 	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy']: parent
-+__xml_acl_post_process:	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
- Call failed: Permission denied
- =#=#=#= End test: l33t-haxor: Create a resource - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - l33t-haxor: Create a resource
-@@ -323,13 +323,13 @@ Call failed: Permission denied
- =#=#=#= End test: niceguy: Query configuration - OK (0) =#=#=#- * Passed: cibadmin  \
                - niceguy: Query configuration
- =#=#=#= Begin test: niceguy: Set enable-acl =#=#=#--__xml_acl_check: 	400 access \
denied to /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
-+__xml_acl_check:	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
- Error performing operation: Permission denied
- Error setting enable-acl=false (section=crm_config, set=<null>): Permission denied
- =#=#=#= End test: niceguy: Set enable-acl - Permission denied (13) =#=#=#- * \
                Passed: crm_attribute  - niceguy: Set enable-acl
- =#=#=#= Begin test: niceguy: Set stonith-enabled =#=#=#--__xml_acl_post_process: \
                Creation of nvpair=cib-bootstrap-options-stonith-enabled is allowed
-+__xml_acl_post_process:	Creation of nvpair=cib-bootstrap-options-stonith-enabled is \
                allowed
- =#=#=#= Current cib after: niceguy: Set stonith-enabled =#=#=#- <cib epoch="7" \
                num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -376,8 +376,8 @@ __xml_acl_post_process: 	Creation of \
                nvpair=cib-bootstrap-options-stonith-enable
- =#=#=#= End test: niceguy: Set stonith-enabled - OK (0) =#=#=#- * Passed: \
                crm_attribute  - niceguy: Set stonith-enabled
- =#=#=#= Begin test: niceguy: Create a resource =#=#=#--__xml_acl_check: 	400 access \
                denied to /cib/configuration/resources/primitive[@id='dummy']: \
                default
--__xml_acl_post_process: 	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy']: default
-+__xml_acl_post_process:	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Create a resource - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - niceguy: Create a resource
-@@ -533,10 +533,11 @@ Error performing operation: Permission denied
- =#=#=#= End test: l33t-haxor: Remove a resource meta attribute - Permission denied \
                (13) =#=#=#- * Passed: crm_resource   - l33t-haxor: Remove a resource \
                meta attribute
- =#=#=#= Begin test: niceguy: Create a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
--__xml_acl_post_process: 	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+__xml_acl_post_process:	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+Set 'dummy' option: id=dummy-meta_attributes-target-role set=dummy-meta_attributes \
                name=target-role=Stopped
- =#=#=#= Current cib after: niceguy: Create a resource meta attribute =#=#=#- <cib \
                epoch="10" num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -589,9 +590,9 @@ __xml_acl_post_process: 	Creation of \
                nvpair=dummy-meta_attributes-target-role is
- =#=#=#= End test: niceguy: Create a resource meta attribute - OK (0) =#=#=#- * \
                Passed: crm_resource   - niceguy: Create a resource meta attribute
- =#=#=#= Begin test: niceguy: Query a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
- Stopped
- =#=#=#= Current cib after: niceguy: Query a resource meta attribute =#=#=#- <cib \
                epoch="10" num_updates="0" admin_epoch="0">
-@@ -645,10 +646,10 @@ Stopped
- =#=#=#= End test: niceguy: Query a resource meta attribute - OK (0) =#=#=#- * \
                Passed: crm_resource   - niceguy: Query a resource meta attribute
- =#=#=#= Begin test: niceguy: Remove a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
--Deleted dummy option: id=dummy-meta_attributes-target-role name=target-role
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+Deleted 'dummy' option: id=dummy-meta_attributes-target-role name=target-role
- =#=#=#= Current cib after: niceguy: Remove a resource meta attribute =#=#=#- <cib \
                epoch="11" num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -699,10 +700,11 @@ Deleted dummy option: id=dummy-meta_attributes-target-role \
                name=target-role
- =#=#=#= End test: niceguy: Remove a resource meta attribute - OK (0) =#=#=#- * \
                Passed: crm_resource   - niceguy: Remove a resource meta attribute
- =#=#=#= Begin test: niceguy: Create a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
--__xml_acl_post_process: 	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+__xml_acl_post_process:	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+Set 'dummy' option: id=dummy-meta_attributes-target-role set=dummy-meta_attributes \
                name=target-role=Started
- =#=#=#= Current cib after: niceguy: Create a resource meta attribute =#=#=#- <cib \
                epoch="12" num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -804,8 +806,8 @@ __xml_acl_post_process: 	Creation of \
                nvpair=dummy-meta_attributes-target-role is
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - remove acls =#=#=#--__xml_acl_check: 	400 \
                access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to /cib/configuration/acls: default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to /cib/configuration/acls: default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - remove acls - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - niceguy: Replace - remove acls
-@@ -859,9 +861,9 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - create resource =#=#=#--__xml_acl_check: \
                400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy2']: default
--__xml_acl_post_process: 	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy2']
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy2']: default
-+__xml_acl_post_process:	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy2']
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - create resource - Permission denied (13) \
                =#=#=#- * Passed: cibadmin       - niceguy: Replace - create resource
-@@ -914,8 +916,8 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - modify attribute (deny) \
                =#=#=#--__xml_acl_check: 	400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - modify attribute (deny) - Permission denied \
                (13) =#=#=#- * Passed: cibadmin       - niceguy: Replace - modify \
                attribute (deny)
-@@ -968,8 +970,8 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - delete attribute (deny) \
                =#=#=#--__xml_acl_check: 	400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl']: \
                default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl']: \
                default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - delete attribute (deny) - Permission denied \
                (13) =#=#=#- * Passed: cibadmin       - niceguy: Replace - delete \
                attribute (deny)
-@@ -1022,8 +1024,8 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - create attribute (deny) \
                =#=#=#--__xml_acl_check: 	400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy'][@description]: \
                default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy'][@description]: \
                default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - create attribute (deny) - Permission denied \
                (13) =#=#=#- * Passed: cibadmin       - niceguy: Replace - create \
                attribute (deny)
-@@ -1180,28 +1182,28 @@ Call failed: Permission denied
-
-     !#!#!#!#! Upgrading to pacemaker-2.0 and retesting !#!#!#!#!
- =#=#=#= Begin test: root: Upgrade to pacemaker-2.0 =#=#=#--__xml_acl_post_process: \
                Creation of acl_permission=observer-read-1 is allowed
--__xml_acl_post_process: 	Creation of acl_permission=observer-write-1 is allowed
--__xml_acl_post_process: 	Creation of acl_permission=observer-write-2 is allowed
--__xml_acl_post_process: 	Creation of acl_permission=admin-read-1 is allowed
--__xml_acl_post_process: 	Creation of acl_permission=admin-write-1 is allowed
--__xml_acl_post_process: 	Creation of acl_target=l33t-haxor is allowed
--__xml_acl_post_process: 	Creation of role=auto-l33t-haxor is allowed
--__xml_acl_post_process: 	Creation of acl_role=auto-l33t-haxor is allowed
--__xml_acl_post_process: 	Creation of acl_permission=crook-nothing is allowed
--__xml_acl_post_process: 	Creation of acl_target=niceguy is allowed
--__xml_acl_post_process: 	Creation of role=observer is allowed
--__xml_acl_post_process: 	Creation of acl_target=bob is allowed
--__xml_acl_post_process: 	Creation of role=admin is allowed
--__xml_acl_post_process: 	Creation of acl_target=badidea is allowed
--__xml_acl_post_process: 	Creation of role=auto-badidea is allowed
--__xml_acl_post_process: 	Creation of acl_role=auto-badidea is allowed
--__xml_acl_post_process: 	Creation of acl_permission=badidea-resources is allowed
--__xml_acl_post_process: 	Creation of acl_target=betteridea is allowed
--__xml_acl_post_process: 	Creation of role=auto-betteridea is allowed
--__xml_acl_post_process: 	Creation of acl_role=auto-betteridea is allowed
--__xml_acl_post_process: 	Creation of acl_permission=betteridea-nothing is allowed
--__xml_acl_post_process: 	Creation of acl_permission=betteridea-resources is allowed
-+__xml_acl_post_process:	Creation of acl_permission=observer-read-1 is allowed
-+__xml_acl_post_process:	Creation of acl_permission=observer-write-1 is allowed
-+__xml_acl_post_process:	Creation of acl_permission=observer-write-2 is allowed
-+__xml_acl_post_process:	Creation of acl_permission=admin-read-1 is allowed
-+__xml_acl_post_process:	Creation of acl_permission=admin-write-1 is allowed
-+__xml_acl_post_process:	Creation of acl_target=l33t-haxor is allowed
-+__xml_acl_post_process:	Creation of role=auto-l33t-haxor is allowed
-+__xml_acl_post_process:	Creation of acl_role=auto-l33t-haxor is allowed
-+__xml_acl_post_process:	Creation of acl_permission=crook-nothing is allowed
-+__xml_acl_post_process:	Creation of acl_target=niceguy is allowed
-+__xml_acl_post_process:	Creation of role=observer is allowed
-+__xml_acl_post_process:	Creation of acl_target=bob is allowed
-+__xml_acl_post_process:	Creation of role=admin is allowed
-+__xml_acl_post_process:	Creation of acl_target=badidea is allowed
-+__xml_acl_post_process:	Creation of role=auto-badidea is allowed
-+__xml_acl_post_process:	Creation of acl_role=auto-badidea is allowed
-+__xml_acl_post_process:	Creation of acl_permission=badidea-resources is allowed
-+__xml_acl_post_process:	Creation of acl_target=betteridea is allowed
-+__xml_acl_post_process:	Creation of role=auto-betteridea is allowed
-+__xml_acl_post_process:	Creation of acl_role=auto-betteridea is allowed
-+__xml_acl_post_process:	Creation of acl_permission=betteridea-nothing is allowed
-+__xml_acl_post_process:	Creation of acl_permission=betteridea-resources is allowed
- =#=#=#= Current cib after: root: Upgrade to pacemaker-2.0 =#=#=#- <cib epoch="2" \
                num_updates="0" admin_epoch="1">
-   <configuration>
-@@ -1271,10 +1273,10 @@ Error performing operation: Permission denied
- =#=#=#= End test: unknownguy: Set stonith-enabled - Permission denied (13) =#=#=#- \
                * Passed: crm_attribute  - unknownguy: Set stonith-enabled
- =#=#=#= Begin test: unknownguy: Create a resource =#=#=#--__xml_acl_check: \
                Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
--__xml_acl_check: 	Ordinary user unknownguy cannot access the CIB without any \
                defined ACLs
--__xml_acl_check: 	Ordinary user unknownguy cannot access the CIB without any \
                defined ACLs
--__xml_acl_check: 	Ordinary user unknownguy cannot access the CIB without any \
                defined ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
-+__xml_acl_check:	Ordinary user unknownguy cannot access the CIB without any defined \
                ACLs
- Call failed: Permission denied
- =#=#=#= End test: unknownguy: Create a resource - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - unknownguy: Create a resource
-@@ -1291,8 +1293,8 @@ Error performing operation: Permission denied
- =#=#=#= End test: l33t-haxor: Set stonith-enabled - Permission denied (13) =#=#=#- \
                * Passed: crm_attribute  - l33t-haxor: Set stonith-enabled
- =#=#=#= Begin test: l33t-haxor: Create a resource =#=#=#--__xml_acl_check: 	400 \
                access denied to /cib/configuration/resources/primitive[@id='dummy']: \
                parent
--__xml_acl_post_process: 	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy']: parent
-+__xml_acl_post_process:	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
- Call failed: Permission denied
- =#=#=#= End test: l33t-haxor: Create a resource - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - l33t-haxor: Create a resource
-@@ -1351,7 +1353,7 @@ Call failed: Permission denied
- =#=#=#= End test: niceguy: Query configuration - OK (0) =#=#=#- * Passed: cibadmin  \
                - niceguy: Query configuration
- =#=#=#= Begin test: niceguy: Set enable-acl =#=#=#--__xml_acl_check: 	400 access \
denied to /cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
-+__xml_acl_check:	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
- Error performing operation: Permission denied
- Error setting enable-acl=false (section=crm_config, set=<null>): Permission denied
- =#=#=#= End test: niceguy: Set enable-acl - Permission denied (13) =#=#=#-@@ \
-1412,8 +1414,8 @@ Error setting enable-acl=false (section=crm_config, set=<null>): \
                Permission deni
- =#=#=#= End test: niceguy: Set stonith-enabled - OK (0) =#=#=#- * Passed: \
                crm_attribute  - niceguy: Set stonith-enabled
- =#=#=#= Begin test: niceguy: Create a resource =#=#=#--__xml_acl_check: 	400 access \
                denied to /cib/configuration/resources/primitive[@id='dummy']: \
                default
--__xml_acl_post_process: 	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy']: default
-+__xml_acl_post_process:	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy']
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Create a resource - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - niceguy: Create a resource
-@@ -1596,10 +1598,11 @@ Error performing operation: Permission denied
- =#=#=#= End test: l33t-haxor: Remove a resource meta attribute - Permission denied \
                (13) =#=#=#- * Passed: crm_resource   - l33t-haxor: Remove a resource \
                meta attribute
- =#=#=#= Begin test: niceguy: Create a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
--__xml_acl_post_process: 	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+__xml_acl_post_process:	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+Set 'dummy' option: id=dummy-meta_attributes-target-role set=dummy-meta_attributes \
                name=target-role=Stopped
- =#=#=#= Current cib after: niceguy: Create a resource meta attribute =#=#=#- <cib \
                epoch="11" num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -1661,9 +1664,9 @@ __xml_acl_post_process: 	Creation of \
                nvpair=dummy-meta_attributes-target-role is
- =#=#=#= End test: niceguy: Create a resource meta attribute - OK (0) =#=#=#- * \
                Passed: crm_resource   - niceguy: Create a resource meta attribute
- =#=#=#= Begin test: niceguy: Query a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
- Stopped
- =#=#=#= Current cib after: niceguy: Query a resource meta attribute =#=#=#- <cib \
                epoch="11" num_updates="0" admin_epoch="0">
-@@ -1726,10 +1729,10 @@ Stopped
- =#=#=#= End test: niceguy: Query a resource meta attribute - OK (0) =#=#=#- * \
                Passed: crm_resource   - niceguy: Query a resource meta attribute
- =#=#=#= Begin test: niceguy: Remove a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
--Deleted dummy option: id=dummy-meta_attributes-target-role name=target-role
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+Deleted 'dummy' option: id=dummy-meta_attributes-target-role name=target-role
- =#=#=#= Current cib after: niceguy: Remove a resource meta attribute =#=#=#- <cib \
                epoch="12" num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -1789,10 +1792,11 @@ Deleted dummy option: id=dummy-meta_attributes-target-role \
                name=target-role
- =#=#=#= End test: niceguy: Remove a resource meta attribute - OK (0) =#=#=#- * \
                Passed: crm_resource   - niceguy: Remove a resource meta attribute
- =#=#=#= Begin test: niceguy: Create a resource meta attribute =#=#=#--error: \
unpack_resources: 	Resource start-up disabled since no STONITH resources have been \
                defined
--error: unpack_resources: 	Either configure some or disable STONITH with the \
                stonith-enabled option
--error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
--__xml_acl_post_process: 	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+error: unpack_resources:	Resource start-up disabled since no STONITH resources have \
                been defined
-+error: unpack_resources:	Either configure some or disable STONITH with the \
                stonith-enabled option
-+error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure \
                data integrity
-+__xml_acl_post_process:	Creation of nvpair=dummy-meta_attributes-target-role is \
                allowed
-+Set 'dummy' option: id=dummy-meta_attributes-target-role set=dummy-meta_attributes \
                name=target-role=Started
- =#=#=#= Current cib after: niceguy: Create a resource meta attribute =#=#=#- <cib \
                epoch="13" num_updates="0" admin_epoch="0">
-   <configuration>
-@@ -1903,8 +1907,8 @@ __xml_acl_post_process: 	Creation of \
                nvpair=dummy-meta_attributes-target-role is
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - remove acls =#=#=#--__xml_acl_check: 	400 \
                access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to /cib/configuration/acls: default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to /cib/configuration/acls: default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - remove acls - Permission denied (13) =#=#=#- * \
                Passed: cibadmin       - niceguy: Replace - remove acls
-@@ -1967,9 +1971,9 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - create resource =#=#=#--__xml_acl_check: \
                400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy2']: default
--__xml_acl_post_process: 	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy2']
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy2']: default
-+__xml_acl_post_process:	Cannot add new node primitive at \
                /cib/configuration/resources/primitive[@id='dummy2']
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - create resource - Permission denied (13) \
                =#=#=#- * Passed: cibadmin       - niceguy: Replace - create resource
-@@ -2031,8 +2035,8 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - modify attribute (deny) \
                =#=#=#--__xml_acl_check: 	400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl'][@value]: \
                default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - modify attribute (deny) - Permission denied \
                (13) =#=#=#- * Passed: cibadmin       - niceguy: Replace - modify \
                attribute (deny)
-@@ -2094,8 +2098,8 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - delete attribute (deny) \
                =#=#=#--__xml_acl_check: 	400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl']: \
                default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
/cib/configuration/crm_config/cluster_property_set[@id='cib-bootstrap-options']/nvpair[@id='cib-bootstrap-options-enable-acl']: \
                default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - delete attribute (deny) - Permission denied \
                (13) =#=#=#- * Passed: cibadmin       - niceguy: Replace - delete \
                attribute (deny)
-@@ -2157,8 +2161,8 @@ Call failed: Permission denied
-   <status/>
- </cib>
- =#=#=#= Begin test: niceguy: Replace - create attribute (deny) \
                =#=#=#--__xml_acl_check: 	400 access denied to /cib[@epoch]: default
--__xml_acl_check: 	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy'][@description]: \
                default
-+__xml_acl_check:	400 access denied to /cib[@epoch]: default
-+__xml_acl_check:	400 access denied to \
                /cib/configuration/resources/primitive[@id='dummy'][@description]: \
                default
- Call failed: Permission denied
- =#=#=#= End test: niceguy: Replace - create attribute (deny) - Permission denied \
                (13) =#=#=#- * Passed: cibadmin       - niceguy: Replace - create \
                attribute (deny)
-diff --git a/tools/regression.tools.exp b/tools/regression.tools.exp
-index 287caf9..b2f4df1 100644
---- a/tools/regression.tools.exp
-+++ b/tools/regression.tools.exp
-@@ -626,6 +626,7 @@ Deleted nodes attribute: id=nodes-node1-standby name=standby
- =#=#=#= End test: Create a resource - OK (0) =#=#=#- * Passed: cibadmin       - \
                Create a resource
- =#=#=#= Begin test: Create a resource meta attribute =#=#=#-+Set 'dummy' option: \
                id=dummy-meta_attributes-is-managed set=dummy-meta_attributes \
                name=is-managed=false
- =#=#=#= Current cib after: Create a resource meta attribute =#=#=#- <cib epoch="15" \
                num_updates="0" admin_epoch="1">
-   <configuration>
-@@ -695,7 +696,7 @@ false
- =#=#=#= End test: Query a resource meta attribute - OK (0) =#=#=#- * Passed: \
                crm_resource   - Query a resource meta attribute
- =#=#=#= Begin test: Remove a resource meta attribute =#=#=#--Deleted dummy option: \
                id=dummy-meta_attributes-is-managed name=is-managed
-+Deleted 'dummy' option: id=dummy-meta_attributes-is-managed name=is-managed
- =#=#=#= Current cib after: Remove a resource meta attribute =#=#=#- <cib epoch="16" \
                num_updates="0" admin_epoch="1">
-   <configuration>
-@@ -728,6 +729,7 @@ Deleted dummy option: id=dummy-meta_attributes-is-managed \
                name=is-managed
- =#=#=#= End test: Remove a resource meta attribute - OK (0) =#=#=#- * Passed: \
                crm_resource   - Remove a resource meta attribute
- =#=#=#= Begin test: Create a resource attribute =#=#=#-+Set 'dummy' option: \
                id=dummy-instance_attributes-delay set=dummy-instance_attributes \
                name=delay=10s
- =#=#=#= Current cib after: Create a resource attribute =#=#=#- <cib epoch="17" \
                num_updates="0" admin_epoch="1">
-   <configuration>
-@@ -763,7 +765,7 @@ Deleted dummy option: id=dummy-meta_attributes-is-managed \
                name=is-managed
- =#=#=#= End test: Create a resource attribute - OK (0) =#=#=#- * Passed: \
                crm_resource   - Create a resource attribute
- =#=#=#= Begin test: List the configured resources =#=#=#-- \
                dummy	(ocf::pacemaker:Dummy):	Stopped
-+ dummy	(ocf::pacemaker:Dummy):	Stopped
- =#=#=#= Current cib after: List the configured resources =#=#=#- <cib epoch="17" \
                num_updates="0" admin_epoch="1">
-   <configuration>
-@@ -973,8 +975,8 @@ Error performing operation: No such device or address
- Current cluster status:
- Online: [ node1 ]
-
-- dummy	(ocf::pacemaker:Dummy):	Stopped
-- Fence	(stonith:fence_true):	Stopped
-+ dummy	(ocf::pacemaker:Dummy):	Stopped
-+ Fence	(stonith:fence_true):	Stopped
-
- Transition Summary:
-  * Start   dummy	(node1)
-@@ -990,8 +992,8 @@ Executing cluster transition:
- Revised cluster status:
- Online: [ node1 ]
-
-- dummy	(ocf::pacemaker:Dummy):	Started node1
-- Fence	(stonith:fence_true):	Started node1
-+ dummy	(ocf::pacemaker:Dummy):	Started node1
-+ Fence	(stonith:fence_true):	Started node1
-
- =#=#=#= Current cib after: Bring resources online =#=#=#- <cib epoch="18" \
                num_updates="4" admin_epoch="1">
-@@ -1710,8 +1712,8 @@ Error performing operation: No such device or address
- Current cluster status:
- Online: [ node1 ]
-
-- dummy	(ocf::pacemaker:Dummy):	Started node1
-- Fence	(stonith:fence_true):	Started node1
-+ dummy	(ocf::pacemaker:Dummy):	Started node1
-+ Fence	(stonith:fence_true):	Started node1
-
- Performing requested modifications
-  + Bringing node node2 online
-@@ -1733,8 +1735,8 @@ Executing cluster transition:
- Revised cluster status:
- Online: [ node1 node2 node3 ]
-
-- dummy	(ocf::pacemaker:Dummy):	Started node1
-- Fence	(stonith:fence_true):	Started node2
-+ dummy	(ocf::pacemaker:Dummy):	Started node1
-+ Fence	(stonith:fence_true):	Started node2
-
- =#=#=#= Current cib after: Create two more nodes and bring them online =#=#=#- <cib \
                epoch="22" num_updates="8" admin_epoch="1">
-@@ -1996,8 +1998,8 @@ WARNING: Creating rsc_location constraint \
                'cli-ban-dummy-on-node2' with a score
- Current cluster status:
- Online: [ node1 node2 node3 ]
-
-- dummy	(ocf::pacemaker:Dummy):	Started node1
-- Fence	(stonith:fence_true):	Started node2
-+ dummy	(ocf::pacemaker:Dummy):	Started node1
-+ Fence	(stonith:fence_true):	Started node2
-
- Transition Summary:
-  * Move    dummy	(Started node1 -> node3)
-@@ -2010,8 +2012,8 @@ Executing cluster transition:
- Revised cluster status:
- Online: [ node1 node2 node3 ]
-
-- dummy	(ocf::pacemaker:Dummy):	Started node3
-- Fence	(stonith:fence_true):	Started node2
-+ dummy	(ocf::pacemaker:Dummy):	Started node3
-+ Fence	(stonith:fence_true):	Started node2
-
- =#=#=#= Current cib after: Relocate resources due to ban =#=#=#=
- <cib epoch="24" num_updates="2" admin_epoch="1">
-diff --git a/valgrind-pcmk.suppressions b/valgrind-pcmk.suppressions
-index 2e382df..0a47096 100644
---- a/valgrind-pcmk.suppressions
-+++ b/valgrind-pcmk.suppressions
-@@ -1,4 +1,4 @@
--# Valgrind suppressions for PE testing
-+# Valgrind suppressions for Pacemaker testing
- {
-    Valgrind bug
-    Memcheck:Addr8
-@@ -57,6 +57,15 @@
- }
-
- {
-+   Cman - Who cares if unused bytes are uninitialized
-+   Memcheck:Param
-+   sendmsg(msg)
-+   fun:__sendmsg_nocancel
-+   obj:*/libcman.so.3.0
-+   obj:*/libcman.so.3.0
-+}
-+
-+{
-    Cman - Jump or move depends on uninitialized values
-    Memcheck:Cond
-    obj:*/libcman.so.3.0
diff --git a/pacemaker-rollup-7-1-3d781d3.patch b/pacemaker-rollup-7-1-3d781d3.patch
deleted file mode 100644
index 30afd6d..0000000
--- a/pacemaker-rollup-7-1-3d781d3.patch
+++ /dev/null
@@ -1,7989 +0,0 @@
-diff --git a/cib/io.c b/cib/io.c
-index e2873a8..4e2b24a 100644
---- a/cib/io.c
-+++ b/cib/io.c
-@@ -254,9 +254,7 @@ readCibXmlFile(const char *dir, const char *file, gboolean \
                discard_status)
-     if (cib_writes_enabled && use_valgrind) {
-         if (crm_is_true(use_valgrind) || strstr(use_valgrind, "cib")) {
-             cib_writes_enabled = FALSE;
--            crm_err("*********************************************************");
-             crm_err("*** Disabling disk writes to avoid confusing Valgrind ***");
--            crm_err("*********************************************************");
-         }
-     }
-
-diff --git a/crmd/crmd_lrm.h b/crmd/crmd_lrm.h
-index 81a53c5..78432df 100644
---- a/crmd/crmd_lrm.h
-+++ b/crmd/crmd_lrm.h
-@@ -37,6 +37,8 @@ typedef struct resource_history_s {
-     GHashTable *stop_params;
- } rsc_history_t;
-
-+void history_free(gpointer data);
-+
- /* TODO - Replace this with lrmd_event_data_t */
- struct recurring_op_s {
-     int call_id;
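
The history_free() declaration added above exists so that crmd/lrm_state.c
can register the history cache's cleanup routine as the value-destroy
callback of its GHashTable. A minimal standalone sketch of that GLib
pattern follows; the entry type and names are illustrative, not from the
patch, and it builds against glib-2.0:

#include <glib.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char *id; } entry_t;

static void entry_free(gpointer data)     /* same shape as history_free() */
{
    entry_t *e = data;

    free(e->id);
    free(e);
}

int main(void)
{
    /* NULL key-destroy: the key borrows e->id, just as the crmd's
     * resource_history table uses history->id as its key */
    GHashTable *table = g_hash_table_new_full(g_str_hash, g_str_equal,
                                              NULL, entry_free);
    entry_t *e = calloc(1, sizeof(entry_t));

    e->id = strdup("dummy");
    g_hash_table_insert(table, e->id, e);
    g_hash_table_destroy(table);          /* runs entry_free() on e */
    return 0;
}
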
-diff --git a/crmd/lrm.c b/crmd/lrm.c
-index 062f769..418e7cf 100644
---- a/crmd/lrm.c
-+++ b/crmd/lrm.c
-@@ -103,6 +103,80 @@ copy_meta_keys(gpointer key, gpointer value, gpointer \
                user_data)
-     }
- }
-
-+/*
-+ * \internal
-+ * \brief Remove a recurring operation from a resource's history
-+ *
-+ * \param[in,out] history  Resource history to modify
-+ * \param[in]     op       Operation to remove
-+ *
-+ * \return TRUE if the operation was found and removed, FALSE otherwise
-+ */
-+static gboolean
-+history_remove_recurring_op(rsc_history_t *history, const lrmd_event_data_t *op)
-+{
-+    GList *iter;
-+
-+    for (iter = history->recurring_op_list; iter != NULL; iter = iter->next) {
-+        lrmd_event_data_t *existing = iter->data;
-+
-+        if ((op->interval == existing->interval)
-+            && crm_str_eq(op->rsc_id, existing->rsc_id, TRUE)
-+            && safe_str_eq(op->op_type, existing->op_type)) {
-+
-+            history->recurring_op_list = \
                g_list_delete_link(history->recurring_op_list, iter);
-+            lrmd_free_event(existing);
-+            return TRUE;
-+        }
-+    }
-+    return FALSE;
-+}
-+
-+/*
-+ * \internal
-+ * \brief Free all recurring operations in resource history
-+ *
-+ * \param[in,out] history  Resource history to modify
-+ */
-+static void
-+history_free_recurring_ops(rsc_history_t *history)
-+{
-+    GList *iter;
-+
-+    for (iter = history->recurring_op_list; iter != NULL; iter = iter->next) {
-+        lrmd_free_event(iter->data);
-+    }
-+    g_list_free(history->recurring_op_list);
-+    history->recurring_op_list = NULL;
-+}
-+
-+/*
-+ * \internal
-+ * \brief Free resource history
-+ *
-+ * \param[in,out] history  Resource history to free
-+ */
-+void
-+history_free(gpointer data)
-+{
-+    rsc_history_t *history = (rsc_history_t*)data;
-+
-+    if (history->stop_params) {
-+        g_hash_table_destroy(history->stop_params);
-+    }
-+
-+    /* Don't need to free history->rsc.id because it's set to history->id */
-+    free(history->rsc.type);
-+    free(history->rsc.class);
-+    free(history->rsc.provider);
-+
-+    lrmd_free_event(history->failed);
-+    lrmd_free_event(history->last);
-+    free(history->id);
-+    history_free_recurring_ops(history);
-+    free(history);
-+}
-+
- static void
- update_history_cache(lrm_state_t * lrm_state, lrmd_rsc_info_t * rsc, \
                lrmd_event_data_t * op)
- {
-@@ -145,25 +219,10 @@ update_history_cache(lrm_state_t * lrm_state, lrmd_rsc_info_t \
                * rsc, lrmd_event_
-     target_rc = rsc_op_expected_rc(op);
-     if (op->op_status == PCMK_LRM_OP_CANCELLED) {
-         if (op->interval > 0) {
--            GList *gIter, *gIterNext;
--
-             crm_trace("Removing cancelled recurring op: %s_%s_%d", op->rsc_id, \
                op->op_type,
-                       op->interval);
--
--            for (gIter = entry->recurring_op_list; gIter != NULL; gIter = \
                gIterNext) {
--                lrmd_event_data_t *existing = gIter->data;
--
--                gIterNext = gIter->next;
--
--                if (crm_str_eq(op->rsc_id, existing->rsc_id, TRUE)
--                    && safe_str_eq(op->op_type, existing->op_type)
--                    && op->interval == existing->interval) {
--                    lrmd_free_event(existing);
--                    entry->recurring_op_list = \
                g_list_delete_link(entry->recurring_op_list, gIter);
--                }
--            }
-+            history_remove_recurring_op(entry, op);
-             return;
--
-         } else {
-             crm_trace("Skipping %s_%s_%d rc=%d, status=%d", op->rsc_id, \
                op->op_type, op->interval,
-                       op->rc, op->op_status);
-@@ -201,32 +260,17 @@ update_history_cache(lrm_state_t * lrm_state, lrmd_rsc_info_t \
                * rsc, lrmd_event_
-     }
-
-     if (op->interval > 0) {
--        GListPtr iter = NULL;
--
--        for(iter = entry->recurring_op_list; iter; iter = iter->next) {
--            lrmd_event_data_t *o = iter->data;
--
--            /* op->rsc_id is implied */
--            if(op->interval == o->interval && strcmp(op->op_type, o->op_type) == 0) \
                {
--                crm_trace("Removing existing recurring op entry: %s_%s_%d", \
                op->rsc_id, op->op_type, op->interval);
--                entry->recurring_op_list = g_list_remove(entry->recurring_op_list, \
                o);
--                break;
--            }
--        }
-+        /* Ensure there are no duplicates */
-+        history_remove_recurring_op(entry, op);
-
-         crm_trace("Adding recurring op: %s_%s_%d", op->rsc_id, op->op_type, \
                op->interval);
-         entry->recurring_op_list = g_list_prepend(entry->recurring_op_list, \
                lrmd_copy_event(op));
-
-     } else if (entry->recurring_op_list && safe_str_eq(op->op_type, RSC_STATUS) == \
                FALSE) {
--        GList *gIter = entry->recurring_op_list;
--
-         crm_trace("Dropping %d recurring ops because of: %s_%s_%d",
--                  g_list_length(gIter), op->rsc_id, op->op_type, op->interval);
--        for (; gIter != NULL; gIter = gIter->next) {
--            lrmd_free_event(gIter->data);
--        }
--        g_list_free(entry->recurring_op_list);
--        entry->recurring_op_list = NULL;
-+                  g_list_length(entry->recurring_op_list), op->rsc_id,
-+                  op->op_type, op->interval);
-+        history_free_recurring_ops(entry);
-     }
- }
-
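
The refactoring above folds two hand-rolled removal loops into
history_remove_recurring_op(). The core mechanism is GLib's
g_list_delete_link(), which unlinks a single node and frees only the link,
leaving the payload to the caller. A standalone sketch of that pattern
(illustrative names, not from the patch):

#include <glib.h>
#include <stdio.h>
#include <string.h>

static gboolean remove_match(GList **list, const char *name)
{
    GList *iter;

    for (iter = *list; iter != NULL; iter = iter->next) {
        if (strcmp(iter->data, name) == 0) {
            g_free(iter->data);                       /* free the payload */
            *list = g_list_delete_link(*list, iter);  /* unlink one node */
            return TRUE;   /* return immediately: iter is now invalid */
        }
    }
    return FALSE;
}

int main(void)
{
    GList *ops = NULL;

    ops = g_list_prepend(ops, g_strdup("monitor_10000"));
    ops = g_list_prepend(ops, g_strdup("monitor_30000"));
    printf("removed: %d, remaining: %u\n",
           remove_match(&ops, "monitor_10000"), g_list_length(ops));
    g_list_free_full(ops, g_free);
    return 0;
}
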
-diff --git a/crmd/lrm_state.c b/crmd/lrm_state.c
-index 374c806..162ad03 100644
---- a/crmd/lrm_state.c
-+++ b/crmd/lrm_state.c
-@@ -32,24 +32,6 @@ int lrmd_internal_proxy_send(lrmd_t * lrmd, xmlNode *msg);
- void lrmd_internal_set_proxy_callback(lrmd_t * lrmd, void *userdata, void \
                (*callback)(lrmd_t *lrmd, void *userdata, xmlNode *msg));
-
- static void
--history_cache_destroy(gpointer data)
--{
--    rsc_history_t *entry = data;
--
--    if (entry->stop_params) {
--        g_hash_table_destroy(entry->stop_params);
--    }
--
--    free(entry->rsc.type);
--    free(entry->rsc.class);
--    free(entry->rsc.provider);
--
--    lrmd_free_event(entry->failed);
--    lrmd_free_event(entry->last);
--    free(entry->id);
--    free(entry);
--}
--static void
- free_rsc_info(gpointer value)
- {
-     lrmd_rsc_info_t *rsc_info = value;
-@@ -155,7 +137,7 @@ lrm_state_create(const char *node_name)
-                                                g_str_equal, g_hash_destroy_str, \
                free_recurring_op);
-
-     state->resource_history = g_hash_table_new_full(crm_str_hash,
--                                                    g_str_equal, NULL, \
                history_cache_destroy);
-+                                                    g_str_equal, NULL, \
                history_free);
-
-     g_hash_table_insert(lrm_state_table, (char *)state->node_name, state);
-     return state;
-diff --git a/cts/CM_ais.py b/cts/CM_ais.py
-index 44f91cd..a34f9b1 100644
---- a/cts/CM_ais.py
-+++ b/cts/CM_ais.py
-@@ -49,42 +49,46 @@ class crm_ais(crm_lha):
-     def NodeUUID(self, node):
-         return node
-
--    def ais_components(self):
-+    def ais_components(self, extra={}):
-
-         complist = []
-         if not len(self.fullcomplist.keys()):
-             for c in ["cib", "lrmd", "crmd", "attrd" ]:
--               self.fullcomplist[c] = Process(
--                   self, c,
--                   pats = self.templates.get_component(self.name, c),
--                   badnews_ignore = self.templates.get_component(self.name, \
                "%s-ignore"%c),
--                   common_ignore = self.templates.get_component(self.name, \
                "common-ignore"))
--
--               self.fullcomplist["pengine"] = Process(
--                   self, "pengine",
--                   dc_pats = self.templates.get_component(self.name, "pengine"),
--                   badnews_ignore = self.templates.get_component(self.name, \
                "pengine-ignore"),
--                   common_ignore = self.templates.get_component(self.name, \
                "common-ignore"))
--
--               self.fullcomplist["stonith-ng"] = Process(
--                   self, "stonith-ng", process="stonithd",
--                   pats = self.templates.get_component(self.name, "stonith"),
--                   badnews_ignore = self.templates.get_component(self.name, \
                "stonith-ignore"),
--                   common_ignore = self.templates.get_component(self.name, \
                "common-ignore"))
--
-+                self.fullcomplist[c] = Process(
-+                    self, c,
-+                    pats = self.templates.get_component(self.name, c),
-+                    badnews_ignore = self.templates.get_component(self.name, \
                "%s-ignore" % c),
-+                    common_ignore = self.templates.get_component(self.name, \
                "common-ignore"))
-+
-+            # pengine uses dc_pats instead of pats
-+            self.fullcomplist["pengine"] = Process(
-+                self, "pengine",
-+                dc_pats = self.templates.get_component(self.name, "pengine"),
-+                badnews_ignore = self.templates.get_component(self.name, \
                "pengine-ignore"),
-+                common_ignore = self.templates.get_component(self.name, \
                "common-ignore"))
-+
-+            # stonith-ng's process name is different from its component name
-+            self.fullcomplist["stonith-ng"] = Process(
-+                self, "stonith-ng", process="stonithd",
-+                pats = self.templates.get_component(self.name, "stonith"),
-+                badnews_ignore = self.templates.get_component(self.name, \
                "stonith-ignore"),
-+                common_ignore = self.templates.get_component(self.name, \
                "common-ignore"))
-+
-+            # add (or replace) any extra components passed in
-+            self.fullcomplist.update(extra)
-+
-+        # Processes running under valgrind can't be shot with "killall -9 \
                processname",
-+        # so don't include them in the returned list
-         vgrind = self.Env["valgrind-procs"].split()
-         for key in self.fullcomplist.keys():
-             if self.Env["valgrind-tests"]:
--               if key in vgrind:
--               # Processes running under valgrind can't be shot with "killall -9 \
                processname"
-+                if key in vgrind:
-                     self.log("Filtering %s from the component list as it is being \
                profiled by valgrind" % key)
-                     continue
-             if key == "stonith-ng" and not self.Env["DoFencing"]:
-                 continue
--
-             complist.append(self.fullcomplist[key])
-
--        #self.complist = [ fullcomplist["pengine"] ]
-         return complist
-
-
-@@ -100,17 +104,14 @@ class crm_cs_v0(crm_ais):
-         crm_ais.__init__(self, Environment, randseed=randseed, name=name)
-
-     def Components(self):
--        self.ais_components()
--        c = "corosync"
--
--        self.fullcomplist[c] = Process(
--            self, c,
--            pats = self.templates.get_component(self.name, c),
--            badnews_ignore = self.templates.get_component(self.name, \
                "%s-ignore"%c),
-+        extra = {}
-+        extra["corosync"] = Process(
-+            self, "corosync",
-+            pats = self.templates.get_component(self.name, "corosync"),
-+            badnews_ignore = self.templates.get_component(self.name, \
                "corosync-ignore"),
-             common_ignore = self.templates.get_component(self.name, \
                "common-ignore")
-         )
--
--        return self.ais_components()
-+        return self.ais_components(extra=extra)
-
-
- class crm_cs_v1(crm_cs_v0):
-diff --git a/cts/environment.py b/cts/environment.py
-index a3399c3..61d4211 100644
---- a/cts/environment.py
-+++ b/cts/environment.py
-@@ -59,7 +59,7 @@ class Environment:
-         self["stonith-params"] = "hostlist=all,livedangerously=yes"
-         self["loop-minutes"] = 60
-         self["valgrind-prefix"] = None
--        self["valgrind-procs"] = "cib crmd attrd pengine stonith-ng"
-+        self["valgrind-procs"] = "attrd cib crmd lrmd pengine stonith-ng"
-         self["valgrind-opts"] = """--leak-check=full --show-reachable=yes \
--trace-children=no --num-callers=25 --gen-suppressions=all \
                --suppressions="""+CTSvars.CTS_home+"""/cts.supp"""
-
-         self["experimental-tests"] = 0
-@@ -578,6 +578,10 @@ class Environment:
-             elif args[i] == "--valgrind-tests":
-                 self["valgrind-tests"] = 1
-
-+            elif args[i] == "--valgrind-procs":
-+                self["valgrind-procs"] = args[i+1]
-+                skipthis = 1
-+
-             elif args[i] == "--no-loop-tests":
-                 self["loop-tests"] = 0
-
-diff --git a/cts/patterns.py b/cts/patterns.py
-index 1bc05a6..493b690 100644
---- a/cts/patterns.py
-+++ b/cts/patterns.py
-@@ -7,7 +7,9 @@ class BasePatterns:
-     def __init__(self, name):
-         self.name = name
-         patternvariants[name] = self
--        self.ignore = []
-+        self.ignore = [
-+            "avoid confusing Valgrind",
-+        ]
-         self.BadNews = []
-         self.components = {}
-         self.commands = {
-@@ -140,7 +142,7 @@ class crm_lha(BasePatterns):
-                 r"Parameters to .* changed",
-             ]
-
--        self.ignore = [
-+        self.ignore = self.ignore + [
-                 r"(ERROR|error):.*\s+assert\s+at\s+crm_glib_handler:"
-                 "(ERROR|error): Message hist queue is filling up",
-                 "stonithd.*CRIT: external_hostlist:.*'vmware gethosts' returned an \
                empty hostlist",
-@@ -177,7 +179,7 @@ class crm_cs_v0(BasePatterns):
-             "Pat:PacemakerUp"  : "%s\W.*pacemakerd.*Starting Pacemaker",
-         })
-
--        self.ignore = [
-+        self.ignore = self.ignore + [
-             r"crm_mon:",
-             r"crmadmin:",
-             r"update_trace_data",
-diff --git a/extra/ansible/docker/group_vars/all \
                b/extra/ansible/docker/group_vars/all
-new file mode 100644
-index 0000000..935e88a
---- /dev/null
-+++ b/extra/ansible/docker/group_vars/all
-@@ -0,0 +1,5 @@
-+max: 4
-+prefix: ansible-pcmk
-+base_image: centos:centos7
-+subnet: 172.17.200
-+pacemaker_authkey: this_is_very_insecure
-\ No newline at end of file
-diff --git a/extra/ansible/docker/hosts b/extra/ansible/docker/hosts
-new file mode 100644
-index 0000000..5b0fb71
---- /dev/null
-+++ b/extra/ansible/docker/hosts
-@@ -0,0 +1,7 @@
-+[controllers]
-+oss-uk-1.clusterlabs.org
-+
-+[containers]
-+ansible-1
-+ansible-2
-+ansible-3
-diff --git a/extra/ansible/docker/roles/docker-host/files/docker-enter \
                b/extra/ansible/docker/roles/docker-host/files/docker-enter
-new file mode 100644
-index 0000000..04c4822
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/docker-enter
-@@ -0,0 +1,29 @@
-+#! /bin/sh -e
-+
-+case "$1" in
-+  -h|--help)
-+    echo "Usage: docker-enter CONTAINER [COMMAND]"
-+    exit 0
-+    ;;
-+esac
-+
-+if [ $(id -ru) -ne 0 ]; then
-+  echo "You have to be root."
-+  exit 1
-+fi
-+
-+if [ $# -eq 0 ]; then
-+  echo "Usage: docker-enter CONTAINER [COMMAND]"
-+  exit 1
-+fi
-+
-+container=$1; shift
-+PID=$(docker inspect --format {{.State.Pid}} "$container")
-+
-+if [ $# -ne 0 ]; then
-+   nsenter --target $PID --mount --uts --ipc --net --pid -- "$@"
-+   exit $?
-+fi
-+
-+nsenter --target $PID --mount --uts --ipc --net --pid
-+exit 0
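
docker-enter above is a thin wrapper over nsenter, which ultimately calls
setns(2) on the file descriptors under /proc/PID/ns/. A standalone C sketch
of that underlying mechanism, joining only the target's network namespace
(illustrative, not part of the patch; needs root, like the script):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char path[64];
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s PID\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/ns/net", argv[1]);
    fd = open(path, O_RDONLY);           /* requires CAP_SYS_ADMIN */
    if (fd < 0 || setns(fd, 0) != 0) {   /* 0 = accept any namespace type */
        perror("setns");
        return 1;
    }
    close(fd);
    execlp("sh", "sh", (char *)NULL);    /* shell now sees the target netns */
    perror("execlp");
    return 1;
}
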
-diff --git a/extra/ansible/docker/roles/docker-host/files/fence_docker_cts \
                b/extra/ansible/docker/roles/docker-host/files/fence_docker_cts
-new file mode 100644
-index 0000000..6d6f025
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/fence_docker_cts
-@@ -0,0 +1,202 @@
-+#!/bin/bash
-+#
-+# Copyright (c) 2014 David Vossel <dvossel@redhat.com>
-+#					All Rights Reserved.
-+#
-+# This program is free software; you can redistribute it and/or modify
-+# it under the terms of version 2 of the GNU General Public License as
-+# published by the Free Software Foundation.
-+#
-+# This program is distributed in the hope that it would be useful, but
-+# WITHOUT ANY WARRANTY; without even the implied warranty of
-+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-+#
-+# Further, this software is distributed without any warranty that it is
-+# free of the rightful claim of any third person regarding infringement
-+# or the like.  Any license provided herein, whether implied or
-+# otherwise, applies only to this software file.  Patent licenses, if
-+# any, provided herein do not apply to combinations of this program with
-+# other software, or any other product whatsoever.
-+#
-+# You should have received a copy of the GNU General Public License
-+# along with this program; if not, write the Free Software Foundation,
-+# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
-+#
-+#######################################################################
-+
-+port=""
-+action="list"		 # Default fence action
-+
-+function usage()
-+{
-+cat <<EOF
-+`basename $0` - A fencing agent for docker containers for testing purposes
-+
-+Usage: `basename $0` -o|--action [-n|--port] [options]
-+Options:
-+ -h, --help 		This text
-+ -V, --version		Version information
-+
-+Commands:
-+ -o, --action		Action to perform: on|off|reboot|status|monitor
-+ -n, --port 		The name of a container to control/check
-+
-+EOF
-+	exit 0;
-+}
-+
-+function metadata()
-+{
-+cat <<EOF
-+<?xml version="1.0" ?>
-+<resource-agent name="fence_docker_cts" shortdesc="docker fencing agent for testing \
                purposes" >
-+	<longdesc>
-+		fence_docker_cts fences docker containers for testing purposes.
-+	</longdesc>
-+	<parameters>
-+	<parameter name="action" unique="1" required="0">
-+		<getopt mixed="-o, --action=[action]" />
-+		<content type="string" default="reboot" />
-+		<shortdesc lang="en">Fencing Action</shortdesc>
-+	</parameter>
-+	<parameter name="port" unique="1" required="0">
-+		<getopt mixed="-n, --port=[port]" />
-+		<content type="string" />
-+		<shortdesc lang="en">The name/id of docker container to control/check</shortdesc>
-+	</parameter>
-+	</parameters>
-+	<actions>
-+	<action name="on" />
-+	<action name="off" />
-+	<action name="reboot" />
-+	<action name="status" />
-+	<action name="list" />
-+	<action name="monitor" />
-+	<action name="metadata" />
-+	</actions>
-+</resource-agent>
-+EOF
-+	exit 0;
-+}
-+
-+function docker_log() {
-+	if ! [ "$action" = "list" ]; then
-+		printf "$*\n" 1>&2
-+	fi
-+}
-+
-+# stdin option processing
-+if [ -z $1 ]; then
-+	# If there are no command line args, look for options from stdin
-+	while read line; do
-+		for word in $(echo "$line"); do
-+			case $word in
-+			option=*|action=*) action=`echo $word | sed s/.*=//`;;
-+			port=*)			port=`echo $word | sed s/.*=//`;;
-+			node=*)			port=`echo $word | sed s/.*=//`;;
-+			nodename=*)			port=`echo $word | sed s/.*=//`;;
-+			--);;
-+			*) docker_log "Invalid command: $word";;
-+			esac
-+		done
-+	done
-+fi
-+
-+# Command line option processing
-+while true ; do
-+	if [ -z "$1" ]; then
-+		break;
-+	fi
-+	case "$1" in
-+	-o|--action|--option) action=$2;	shift; shift;;
-+	-n|--port)			port=$2;	  shift; shift;;
-+	-V|--version) echo "1.0.0"; exit 0;;
-+	--help|-h)
-+		usage;
-+		exit 0;;
-+	--) shift ; break ;;
-+	*) docker_log "Unknown option: $1. See --help for details."; exit 1;;
-+	esac
-+done
-+
-+action=`echo $action | tr 'A-Z' 'a-z'`
-+case $action in
-+	hostlist|list) action=list;;
-+	stat|status)   action=status;;
-+	restart|reboot|reset)  action=reboot;;
-+	poweron|on)	action=start;;
-+	poweroff|off)  action=stop;;
-+esac
-+
-+function fence_done()
-+{
-+	if [ $1 -eq 0 ]; then
-+		docker_log "Operation $action (port=$port) passed"
-+	else
-+		docker_log "Operation $action (port=$port) failed: $1"
-+	fi
-+	if [ -z "$returnfile" ]; then
-+		rm -f $returnfile
-+	fi
-+	if [ -z "$helperscript" ]; then
-+		rm -f $helperscript
-+	fi
-+	exit $1
-+}
-+
-+case $action in
-+	metadata) metadata;;
-+esac
-+
-+returnfile=$(mktemp /tmp/fence_docker_cts_returnfileXXXX)
-+returnstring=""
-+helper_script=$(mktemp /tmp/fence_docker_cts_helperXXXX)
-+
-+exec_action()
-+{
-+	echo "#!/bin/bash" > $helper_script
-+	echo "sleep 10000" >> $helper_script
-+	chmod 755 $helper_script
-+	src="$(uname -n)"
-+
-+	$helper_script "$src" "$action" "$returnfile" "$port" > /dev/null 2>&1 &
-+	pid=$!
-+	docker_log "waiting on pid $pid"
-+	wait $pid > /dev/null 2>&1
-+	returnstring=$(cat $returnfile)
-+
-+	if [ -z "$returnstring" ]; then
-+		docker_log "fencing daemon did not respond"
-+		fence_done 1
-+	fi
-+
-+	if [ "$returnstring" == "fail" ]; then
-+		docker_log "fencing daemon failed to execute action [$action on port $port]"
-+		fence_done 1
-+	fi
-+
-+	return 0
-+}
-+
-+exec_action
-+case $action in
-+	list)
-+		cat $returnfile
-+		fence_done 0
-+		;;
-+
-+	status)
-+		# 0 if container is on
-+		# 1 if container can not be contacted or unknown
-+		# 2 if container is off
-+		if [ "$returnstring" = "true" ]; then
-+			fence_done 0
-+		else
-+			fence_done 2
-+		fi
-+		;;
-+	monitor|stop|start|reboot) : ;;
-+	*) docker_log "Unknown action: $action"; fence_done 1;;
-+esac
-+
-+fence_done $?
-diff --git a/extra/ansible/docker/roles/docker-host/files/launch.sh \
                b/extra/ansible/docker/roles/docker-host/files/launch.sh
-new file mode 100644
-index 0000000..66bebf4
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/launch.sh
-@@ -0,0 +1,4 @@
-+#!/bin/bash
-+while true; do
-+	sleep 1
-+done
-diff --git a/extra/ansible/docker/roles/docker-host/files/pcmk_remote_start \
                b/extra/ansible/docker/roles/docker-host/files/pcmk_remote_start
-new file mode 100644
-index 0000000..1bf0320
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/pcmk_remote_start
-@@ -0,0 +1,18 @@
-+#!/bin/bash
-+/usr/sbin/ip_start
-+pid=$(pidof pacemaker_remoted)
-+if [ "$?" -ne 0 ];  then
-+	mkdir -p /var/run
-+
-+	export PCMK_debugfile=$pcmklogs
-+	(pacemaker_remoted &) & > /dev/null 2>&1
-+	sleep 5
-+
-+	pid=$(pidof pacemaker_remoted)
-+	if [ "$?" -ne 0 ]; then
-+		echo "startup of pacemaker failed"
-+		exit 1
-+	fi
-+	echo "$pid" > /var/run/pacemaker_remoted.pid
-+fi
-+exit 0
-diff --git a/extra/ansible/docker/roles/docker-host/files/pcmk_remote_stop \
                b/extra/ansible/docker/roles/docker-host/files/pcmk_remote_stop
-new file mode 100644
-index 0000000..074cd59
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/pcmk_remote_stop
-@@ -0,0 +1,36 @@
-+#!/bin/bash
-+status()
-+{
-+	pid=$(pidof $1 2>/dev/null)
-+	rtrn=$?
-+	if [ $rtrn -ne 0 ]; then
-+		echo "$1 is stopped"
-+	else
-+		echo "$1 (pid $pid) is running..."
-+	fi
-+	return $rtrn
-+}
-+stop()
-+{
-+	desc="Pacemaker Remote"
-+	prog=$1
-+	shutdown_prog=$prog
-+
-+	if status $shutdown_prog > /dev/null 2>&1; then
-+	    kill -TERM $(pidof $prog) > /dev/null 2>&1
-+
-+	    while status $prog > /dev/null 2>&1; do
-+		sleep 1
-+		echo -n "."
-+	    done
-+	else
-+	    echo -n "$desc is already stopped"
-+	fi
-+
-+	rm -f /var/lock/subsystem/pacemaker
-+	rm -f /var/run/${prog}.pid
-+	killall -q -9 crmd stonithd attrd cib lrmd pacemakerd pacemaker_remoted
-+}
-+
-+stop "pacemaker_remoted"
-+exit 0
-diff --git a/extra/ansible/docker/roles/docker-host/files/pcmk_start \
                b/extra/ansible/docker/roles/docker-host/files/pcmk_start
-new file mode 100644
-index 0000000..d8b2ba8
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/pcmk_start
-@@ -0,0 +1,23 @@
-+#!/bin/bash
-+
-+/usr/sbin/ip_start
-+sed -i 's@to_syslog:.*yes@to_logfile: yes\nlogfile: /var/log/pacemaker.log@g' \
                /etc/corosync/corosync.conf
-+
-+/usr/share/corosync/corosync start > /dev/null 2>&1
-+
-+pid=$(pidof pacemakerd)
-+if [ "$?" -ne 0 ];  then
-+	mkdir -p /var/run
-+
-+	export PCMK_debugfile=$pcmklogs
-+	(pacemakerd &) & > /dev/null 2>&1
-+	sleep 5
-+
-+	pid=$(pidof pacemakerd)
-+	if [ "$?" -ne 0 ]; then
-+		echo "startup of pacemaker failed"
-+		exit 1
-+	fi
-+	echo "$pid" > /var/run/pacemakerd.pid
-+fi
-+exit 0
-diff --git a/extra/ansible/docker/roles/docker-host/files/pcmk_stop \
                b/extra/ansible/docker/roles/docker-host/files/pcmk_stop
-new file mode 100644
-index 0000000..a8f395a
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/files/pcmk_stop
-@@ -0,0 +1,45 @@
-+#!/bin/bash
-+status()
-+{
-+	pid=$(pidof $1 2>/dev/null)
-+	rtrn=$?
-+	if [ $rtrn -ne 0 ]; then
-+		echo "$1 is stopped"
-+	else
-+		echo "$1 (pid $pid) is running..."
-+	fi
-+	return $rtrn
-+}
-+stop()
-+{
-+	desc="Pacemaker Cluster Manager"
-+	prog=$1
-+	shutdown_prog=$prog
-+
-+	if ! status $prog > /dev/null 2>&1; then
-+	    shutdown_prog="crmd"
-+	fi
-+
-+	cname=$(crm_node --name)
-+	crm_attribute -N $cname -n standby -v true -l reboot
-+
-+	if status $shutdown_prog > /dev/null 2>&1; then
-+	    kill -TERM $(pidof $prog) > /dev/null 2>&1
-+
-+	    while status $prog > /dev/null 2>&1; do
-+		sleep 1
-+		echo -n "."
-+	    done
-+	else
-+	    echo -n "$desc is already stopped"
-+	fi
-+
-+	rm -f /var/lock/subsystem/pacemaker
-+	rm -f /var/run/${prog}.pid
-+	killall -q -9 crmd stonithd attrd cib lrmd pacemakerd pacemaker_remoted
-+}
-+
-+stop "pacemakerd"
-+/usr/share/corosync/corosync stop > /dev/null 2>&1
-+killall -q -9 'corosync'
-+exit 0
-diff --git a/extra/ansible/docker/roles/docker-host/tasks/main.yml \
                b/extra/ansible/docker/roles/docker-host/tasks/main.yml
-new file mode 100644
-index 0000000..ce69adf
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/tasks/main.yml
-@@ -0,0 +1,77 @@
-+---
-+#local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}
-+- name: Update docker
-+  yum: pkg=docker state=latest
-+- name: Start docker
-+  service: name=docker state=started enabled=yes
-+- name: Install helper
-+  copy: src=docker-enter dest=/usr/sbin/ mode=0755
-+- name: Download image
-+  shell: docker pull {{ base_image }}
-+- name: Cleanup kill
-+  shell: docker kill $(docker ps -a | grep {{ prefix }} | awk '{print $1}') || echo \
                "Nothing to kill"
-+- name: Cleanup remove
-+  shell: docker rm $(docker ps -a | grep {{ prefix }} | awk '{print $1}') || echo \
                "Nothing to remove"
-+- name: Cleanup docker skeleton
-+  file: path={{ prefix }} state=absent
-+- name: Create docker skeleton
-+  file: path={{ prefix }}/{{ item }} state=directory recurse=yes
-+  with_items:
-+  - rpms
-+  - repos
-+  - bin_files
-+  - launch_scripts
-+- name: Create IP helper
-+  template: src=ip_start.j2 dest={{ prefix }}/bin_files/ip_start mode=0755
-+- name: Copy helper scripts
-+  copy: src={{ item }} dest={{ prefix }}/bin_files/{{ item }} mode=0755
-+  with_items:
-+  - pcmk_stop
-+  - pcmk_start
-+  - pcmk_remote_stop
-+  - pcmk_remote_start
-+  - fence_docker_cts
-+- name: Copy launch script
-+  copy: src=launch.sh dest={{ prefix }}/launch_scripts/launch.sh mode=0755
-+- name: Copy authorized keys
-+  shell: cp /root/.ssh/authorized_keys {{ prefix }}
-+- name: Create docker file
-+  template: src=Dockerfile.j2 dest={{ prefix }}/Dockerfile
-+- name: Making image
-+  shell: docker build -t {{ prefix }} {{ prefix }}
-+- name: Launch images
-+  shell: docker run -d -i -t -P -h {{ prefix }}-{{ item }} --name={{ prefix }}-{{ \
item }} -p 2200{{ item }}:22 $(docker images | grep {{ prefix }}.*latest | awk \
                '{print $3}') /bin/bash
-+  with_sequence: count={{ max }}
-+- name: Calculate IPs
-+  shell: for n in $(seq {{ max }} ); do echo {{ subnet }}.${n}; done | tr '\n' ' '
-+  register: node_ips
-+- name: Start the IP
-+  shell: docker-enter {{ prefix }}-{{ item }} ip_start
-+  with_sequence: count={{ max }}
-+- name: Configure cluster
-+  shell: docker-enter {{ prefix }}-{{ item }} pcs cluster setup --local --name {{ \
                prefix }} {{ node_ips.stdout }}
-+  with_sequence: count={{ max }}
-+- name: Start the cluster
-+  shell: docker-enter {{ prefix }}-{{ item }} pcmk_start
-+  with_sequence: count={{ max }}
-+- name: Set cluster options
-+  shell: docker-enter {{ prefix }}-1 pcs property set stonith-enabled=false
-+- name: Configure VIP
-+  shell: docker-enter {{ prefix }}-1 pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip={{ subnet }}.100 cidr_netmask=32 op monitor interval=30s
-+- name: Configure
-+  shell: docker-enter {{ prefix }}-1 pcs resource defaults resource-stickiness=100
-+- name: Configure
-+  shell: docker-enter {{ prefix }}-1 pcs resource create WebSite apache \
configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op \
                monitor interval=1min
-+- name: Configure
-+  shell: docker-enter {{ prefix }}-1 pcs constraint colocation add WebSite with \
                ClusterIP INFINITY
-+- name: Configure
-+  shell: docker-enter {{ prefix }}-1 pcs constraint order ClusterIP then WebSite
-+- name: Configure
-+  shell: docker-enter {{ prefix }}-1 pcs constraint location WebSite prefers {{ prefix }}-1=50
-+# TODO: Enable fencing
-+# TODO: Make this a full LAMP stack similar to \
                https://github.com/ansible/ansible-examples/tree/master/lamp_simple
-+# TODO: Create a Pacemaker module?
-+
-+#  run_once: true
-+#  delegate_to: web01.example.org
-+
-diff --git a/extra/ansible/docker/roles/docker-host/templates/Dockerfile.j2 \
                b/extra/ansible/docker/roles/docker-host/templates/Dockerfile.j2
-new file mode 100644
-index 0000000..1d57175
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/templates/Dockerfile.j2
-@@ -0,0 +1,16 @@
-+FROM {{ base_image }}
-+ADD /repos /etc/yum.repos.d/
-+#ADD /rpms /root/
-+#RUN yum install -y /root/*.rpm
-+ADD /launch_scripts /root/
-+ADD /bin_files /usr/sbin/
-+
-+RUN mkdir -p /root/.ssh; chmod 700 /root/.ssh
-+ADD authorized_keys /root/.ssh/
-+
-+RUN yum install -y openssh-server net-tools pacemaker pacemaker-cts resource-agents \
                pcs corosync which fence-agents-common sysvinit-tools
-+RUN mkdir -p /etc/pacemaker/
-+RUN echo {{ pacemaker_authkey }} > /etc/pacemaker/authkey
-+RUN /usr/sbin/sshd
-+
-+ENTRYPOINT ["/root/launch.sh"]
-diff --git a/extra/ansible/docker/roles/docker-host/templates/ip_start.j2 \
                b/extra/ansible/docker/roles/docker-host/templates/ip_start.j2
-new file mode 100755
-index 0000000..edbd392
---- /dev/null
-+++ b/extra/ansible/docker/roles/docker-host/templates/ip_start.j2
-@@ -0,0 +1,3 @@
-+offset=$(hostname | sed s/.*-//)
-+export OCF_ROOT=/usr/lib/ocf/ OCF_RESKEY_ip={{ subnet }}.${offset} OCF_RESKEY_cidr_netmask=32
-+/usr/lib/ocf/resource.d/heartbeat/IPaddr2 start
-diff --git a/extra/ansible/docker/site.yml b/extra/ansible/docker/site.yml
-new file mode 100644
-index 0000000..0cc65e4
---- /dev/null
-+++ b/extra/ansible/docker/site.yml
-@@ -0,0 +1,12 @@
-+---
-+# See /etc/ansible/hosts or -i hosts
-+- hosts: controllers
-+  remote_user: root
-+  roles:
-+    - docker-host
-+
-+#- hosts: containers
-+#  gather_facts: no
-+#  remote_user: root
-+#  roles:
-+#    - docker-container
-diff --git a/include/crm/msg_xml.h b/include/crm/msg_xml.h
-index 42f9003..15f1b3c 100644
---- a/include/crm/msg_xml.h
-+++ b/include/crm/msg_xml.h
-@@ -194,6 +194,7 @@
- #  define XML_RSC_ATTR_INTERLEAVE	"interleave"
- #  define XML_RSC_ATTR_INCARNATION	"clone"
- #  define XML_RSC_ATTR_INCARNATION_MAX	"clone-max"
-+#  define XML_RSC_ATTR_INCARNATION_MIN	"clone-min"
- #  define XML_RSC_ATTR_INCARNATION_NODEMAX	"clone-node-max"
- #  define XML_RSC_ATTR_MASTER_MAX	"master-max"
- #  define XML_RSC_ATTR_MASTER_NODEMAX	"master-node-max"
-diff --git a/include/crm/pengine/status.h b/include/crm/pengine/status.h
-index 4214959..b95b1e5 100644
---- a/include/crm/pengine/status.h
-+++ b/include/crm/pengine/status.h
-@@ -256,7 +256,6 @@ struct resource_s {
-     int stickiness;
-     int sort_index;
-     int failure_timeout;
--    int remote_reconnect_interval;
-     int effective_priority;
-     int migration_threshold;
-
-@@ -295,6 +294,7 @@ struct resource_s {
-
-     const char *isolation_wrapper;
-     gboolean exclusive_discover;
-+    int remote_reconnect_interval;
- };
-
- struct pe_action_s {
-@@ -324,6 +324,26 @@ struct pe_action_s {
-     GHashTable *meta;
-     GHashTable *extra;
-
-+    /*
-+     * These two variables are associated with the constraint logic
-+     * that requires one or more "first" actions to be runnable before
-+     * this action is allowed to execute.
-+     *
-+     * They are used with features such as 'clone-min', which requires
-+     * at least X cloned instances to be running before an order
-+     * dependency can run. Another option that uses this is
-+     * 'require-all=false' in ordering constraints; that option says
-+     * "require only one instance of a resource to start before
-+     * allowing dependencies to start". Basically, require-all=false is
-+     * the same as clone-min=1.
-+     */
-+
-+    /* current number of known runnable actions in the before list. */
-+    int runnable_before;
-+    /* the number of "before" runnable actions required for this action
-+     * to be considered runnable */
-+    int required_runnable_before;
-+
-     GListPtr actions_before;    /* action_wrapper_t* */
-     GListPtr actions_after;     /* action_wrapper_t* */
- };
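
The two counters added to pe_action_s above implement a plain threshold
gate: an action flagged pe_action_requires_any becomes runnable only once
at least required_runnable_before of its "before" actions are runnable
(clone-min supplies the threshold; require-all=false is the threshold=1
case). A standalone sketch of that rule, with illustrative types and
names, not the pengine code itself:

#include <stdbool.h>
#include <stdio.h>

struct action {
    int runnable_before;           /* runnable "first" actions counted so far */
    int required_runnable_before;  /* threshold, e.g. clone-min */
};

/* Called once per runnable "first" action; returns whether the gated
 * action may now run. */
static bool gate_runnable(struct action *then)
{
    then->runnable_before++;
    return then->runnable_before >= then->required_runnable_before;
}

int main(void)
{
    struct action then = { 0, 3 };  /* e.g. clone-min=3 */
    int i;

    for (i = 1; i <= 3; i++) {
        printf("after %d runnable instance(s): %s\n", i,
               gate_runnable(&then) ? "runnable" : "blocked");
    }
    return 0;
}
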
-diff --git a/lib/cib/Makefile.am b/lib/cib/Makefile.am
-index e84f4f7..1e50511 100644
---- a/lib/cib/Makefile.am
-+++ b/lib/cib/Makefile.am
-@@ -28,7 +28,7 @@ noinst_HEADERS		=
- libcib_la_SOURCES	= cib_ops.c cib_utils.c cib_client.c cib_native.c cib_attrs.c
- libcib_la_SOURCES      += cib_file.c cib_remote.c
-
--libcib_la_LDFLAGS	= -version-info 4:1:0 -L$(top_builddir)/lib/pengine/.libs
-+libcib_la_LDFLAGS	= -version-info 4:2:0 -L$(top_builddir)/lib/pengine/.libs
- libcib_la_LIBADD        = $(CRYPTOLIB) $(top_builddir)/lib/pengine/libpe_rules.la \
                $(top_builddir)/lib/common/libcrmcommon.la
- libcib_la_CFLAGS	= -I$(top_srcdir)
-
-diff --git a/lib/cluster/Makefile.am b/lib/cluster/Makefile.am
-index 29413ba..29daeb2 100644
---- a/lib/cluster/Makefile.am
-+++ b/lib/cluster/Makefile.am
-@@ -28,7 +28,7 @@ header_HEADERS		=
- lib_LTLIBRARIES	= libcrmcluster.la
-
- libcrmcluster_la_SOURCES = election.c cluster.c membership.c
--libcrmcluster_la_LDFLAGS = -version-info 4:2:0 $(CLUSTERLIBS)
-+libcrmcluster_la_LDFLAGS = -version-info 5:0:1 $(CLUSTERLIBS)
- libcrmcluster_la_LIBADD  = $(top_builddir)/lib/common/libcrmcommon.la \
                $(top_builddir)/lib/fencing/libstonithd.la
- libcrmcluster_la_DEPENDENCIES = $(top_builddir)/lib/common/libcrmcommon.la \
                $(top_builddir)/lib/fencing/libstonithd.la
-
-diff --git a/lib/common/Makefile.am b/lib/common/Makefile.am
-index a593f40..f5c0766 100644
---- a/lib/common/Makefile.am
-+++ b/lib/common/Makefile.am
-@@ -37,7 +37,7 @@ if BUILD_CIBSECRETS
- libcrmcommon_la_SOURCES	+= cib_secrets.c
- endif
-
--libcrmcommon_la_LDFLAGS	= -version-info 7:0:4
-+libcrmcommon_la_LDFLAGS	= -version-info 8:0:5
- libcrmcommon_la_LIBADD  = @LIBADD_DL@ $(GNUTLSLIBS)
- libcrmcommon_la_SOURCES += $(top_builddir)/lib/gnu/md5.c
-
-diff --git a/lib/fencing/Makefile.am b/lib/fencing/Makefile.am
-index 2bdcfeb..fbe02e4 100644
---- a/lib/fencing/Makefile.am
-+++ b/lib/fencing/Makefile.am
-@@ -25,7 +25,7 @@ AM_CPPFLAGS         = -I$(top_builddir)/include  \
                -I$(top_srcdir)/include     \
- lib_LTLIBRARIES = libstonithd.la
-
- libstonithd_la_SOURCES = st_client.c
--libstonithd_la_LDFLAGS = -version-info 3:2:1
-+libstonithd_la_LDFLAGS = -version-info 3:3:1
- libstonithd_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon.la
-
- AM_CFLAGS = $(AM_CPPFLAGS)
-diff --git a/lib/lrmd/Makefile.am b/lib/lrmd/Makefile.am
-index f961ae1..820654c 100644
---- a/lib/lrmd/Makefile.am
-+++ b/lib/lrmd/Makefile.am
-@@ -25,7 +25,7 @@ AM_CPPFLAGS         = -I$(top_builddir)/include  \
                -I$(top_srcdir)/include     \
- lib_LTLIBRARIES = liblrmd.la
-
- liblrmd_la_SOURCES = lrmd_client.c proxy_common.c
--liblrmd_la_LDFLAGS = -version-info 3:0:2
-+liblrmd_la_LDFLAGS = -version-info 3:1:2
- liblrmd_la_LIBADD = $(top_builddir)/lib/common/libcrmcommon.la	\
- 			$(top_builddir)/lib/services/libcrmservice.la \
- 			$(top_builddir)/lib/fencing/libstonithd.la
-diff --git a/lib/pengine/Makefile.am b/lib/pengine/Makefile.am
-index 78da075..60d1770 100644
---- a/lib/pengine/Makefile.am
-+++ b/lib/pengine/Makefile.am
-@@ -26,11 +26,11 @@ lib_LTLIBRARIES	= libpe_rules.la libpe_status.la
- ## SOURCES
- noinst_HEADERS	= unpack.h variant.h
-
--libpe_rules_la_LDFLAGS	= -version-info 2:4:0
-+libpe_rules_la_LDFLAGS	= -version-info 2:5:0
- libpe_rules_la_SOURCES	= rules.c common.c
- libpe_rules_la_LIBADD	= $(top_builddir)/lib/common/libcrmcommon.la
-
--libpe_status_la_LDFLAGS	= -version-info 8:0:4
-+libpe_status_la_LDFLAGS	= -version-info 9:0:5
- libpe_status_la_SOURCES	=  status.c unpack.c utils.c complex.c native.c group.c \
                clone.c rules.c common.c
- libpe_status_la_LIBADD	=  @CURSESLIBS@ $(top_builddir)/lib/common/libcrmcommon.la
-
-diff --git a/lib/services/dbus.c b/lib/services/dbus.c
-index 6341fc5..e2efecb 100644
---- a/lib/services/dbus.c
-+++ b/lib/services/dbus.c
-@@ -64,11 +64,14 @@ pcmk_dbus_find_error(const char *method, DBusPendingCall* \
                pending, DBusMessage *
-     } else {
-         DBusMessageIter args;
-         int dtype = dbus_message_get_type(reply);
-+        char *sig;
-
-         switch(dtype) {
-             case DBUS_MESSAGE_TYPE_METHOD_RETURN:
-                 dbus_message_iter_init(reply, &args);
--                crm_trace("Call to %s returned '%s'", method, \
                dbus_message_iter_get_signature(&args));
-+                sig = dbus_message_iter_get_signature(&args);
-+                crm_trace("Call to %s returned '%s'", method, sig);
-+                dbus_free(sig);
-                 break;
-             case DBUS_MESSAGE_TYPE_INVALID:
-                 error.message = "Invalid reply";
-@@ -217,11 +220,14 @@ bool pcmk_dbus_type_check(DBusMessage *msg, DBusMessageIter \
                *field, int expected
-
-     if(dtype != expected) {
-         DBusMessageIter args;
-+        char *sig;
-
-         dbus_message_iter_init(msg, &args);
-+        sig = dbus_message_iter_get_signature(&args);
-         do_crm_log_alias(LOG_ERR, __FILE__, function, line,
--                         "Unexepcted DBus type, expected %c in '%s' instead of %c",
--                         expected, dbus_message_iter_get_signature(&args), dtype);
-+                         "Unexpected DBus type, expected %c in '%s' instead of %c",
-+                         expected, sig, dtype);
-+        dbus_free(sig);
-         return FALSE;
-     }
-
-diff --git a/lib/services/services.c b/lib/services/services.c
-index 08bff88..7e2b9f7 100644
---- a/lib/services/services.c
-+++ b/lib/services/services.c
-@@ -348,6 +348,34 @@ services_action_create_generic(const char *exec, const char \
                *args[])
-     return op;
- }
-
-+#if SUPPORT_DBUS
-+/*
-+ * \internal
-+ * \brief Update operation's pending DBus call, unreferencing old one if needed
-+ *
-+ * \param[in,out] op       Operation to modify
-+ * \param[in]     pending  Pending call to set
-+ */
-+void
-+services_set_op_pending(svc_action_t *op, DBusPendingCall *pending)
-+{
-+    if (op->opaque->pending && (op->opaque->pending != pending)) {
-+        if (pending) {
-+            crm_info("Lost pending DBus call (%p)", op->opaque->pending);
-+        } else {
-+            crm_trace("Done with pending DBus call (%p)", op->opaque->pending);
-+        }
-+        dbus_pending_call_unref(op->opaque->pending);
-+    }
-+    op->opaque->pending = pending;
-+    if (pending) {
-+        crm_trace("Updated pending DBus call (%p)", pending);
-+    } else {
-+        crm_trace("Cleared pending DBus call");
-+    }
-+}
-+#endif
-+
- void
- services_action_cleanup(svc_action_t * op)
- {
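
services_set_op_pending() above centralizes an ownership rule: an
operation holds exactly one reference to its pending DBus call, so
installing a new call (or NULL) must drop the old reference exactly once.
A standalone sketch of that rule using a generic ref-counted handle in
place of DBusPendingCall (illustrative; the real code calls
dbus_pending_call_ref()/dbus_pending_call_unref()):

#include <stdio.h>
#include <stdlib.h>

typedef struct { int refs; } handle_t;

static handle_t *handle_ref(handle_t *h) { h->refs++; return h; }

static void handle_unref(handle_t *h)
{
    if (--h->refs == 0) {
        free(h);
        printf("handle freed\n");
    }
}

typedef struct { handle_t *pending; } op_t;

/* Same shape as services_set_op_pending(): replace the pending handle,
 * releasing the reference held on the old one. */
static void op_set_pending(op_t *op, handle_t *pending)
{
    if (op->pending && op->pending != pending) {
        handle_unref(op->pending);   /* drop the reference we held */
    }
    op->pending = pending;           /* take over the caller's reference */
}

int main(void)
{
    op_t op = { NULL };
    handle_t *a = handle_ref(calloc(1, sizeof(handle_t)));

    op_set_pending(&op, a);     /* op now owns a */
    op_set_pending(&op, NULL);  /* a is released exactly once */
    return 0;
}
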
-diff --git a/lib/services/services_private.h b/lib/services/services_private.h
-index 183afb5..a98cd91 100644
---- a/lib/services/services_private.h
-+++ b/lib/services/services_private.h
-@@ -63,4 +63,8 @@ void handle_blocked_ops(void);
-
- gboolean is_op_blocked(const char *rsc);
-
-+#if SUPPORT_DBUS
-+void services_set_op_pending(svc_action_t *op, DBusPendingCall *pending);
-+#endif
-+
- #endif                          /* __MH_SERVICES_PRIVATE_H__ */
-diff --git a/lib/services/systemd.c b/lib/services/systemd.c
-index 749d61c..e1e1bc9 100644
---- a/lib/services/systemd.c
-+++ b/lib/services/systemd.c
-@@ -461,7 +461,12 @@ systemd_async_dispatch(DBusPendingCall *pending, void \
                *user_data)
-
-     if(op) {
-         crm_trace("Got result: %p for %p for %s, %s", reply, pending, op->rsc, \
                op->action);
--        op->opaque->pending = NULL;
-+        if (pending == op->opaque->pending) {
-+            op->opaque->pending = NULL;
-+        } else {
-+            crm_info("Received unexpected reply for pending DBus call (%p vs %p)",
-+                     op->opaque->pending, pending);
-+        }
-         systemd_exec_result(reply, op);
-
-     } else {
-@@ -499,10 +504,7 @@ systemd_unit_check(const char *name, const char *state, void \
                *userdata)
-     }
-
-     if (op->synchronous == FALSE) {
--        if (op->opaque->pending) {
--            dbus_pending_call_unref(op->opaque->pending);
--        }
--        op->opaque->pending = NULL;
-+        services_set_op_pending(op, NULL);
-         operation_finalize(op);
-     }
- }
-@@ -535,7 +537,7 @@ systemd_unit_exec_with_unit(svc_action_t * op, const char *unit)
-             return op->rc == PCMK_OCF_OK;
-         } else if (pending) {
-             dbus_pending_call_ref(pending);
--            op->opaque->pending = pending;
-+            services_set_op_pending(op, pending);
-             return TRUE;
-         }
-
-@@ -617,8 +619,7 @@ systemd_unit_exec_with_unit(svc_action_t * op, const char *unit)
-
-         dbus_message_unref(msg);
-         if(pending) {
--            dbus_pending_call_ref(pending);
--            op->opaque->pending = pending;
-+            services_set_op_pending(op, pending);
-             return TRUE;
-         }
-         return FALSE;
-diff --git a/lib/transition/Makefile.am b/lib/transition/Makefile.am
-index 8ce7775..04d18fe 100644
---- a/lib/transition/Makefile.am
-+++ b/lib/transition/Makefile.am
-@@ -27,7 +27,7 @@ lib_LTLIBRARIES	= libtransitioner.la
- noinst_HEADERS		=
- libtransitioner_la_SOURCES	= unpack.c graph.c utils.c
-
--libtransitioner_la_LDFLAGS	= -version-info 2:3:0
-+libtransitioner_la_LDFLAGS	= -version-info 2:4:0
- libtransitioner_la_CFLAGS	= -I$(top_builddir)
- libtransitioner_la_LIBADD       = $(top_builddir)/lib/common/libcrmcommon.la
-
-diff --git a/pengine/Makefile.am b/pengine/Makefile.am
-index 31532cf..0e12a1f 100644
---- a/pengine/Makefile.am
-+++ b/pengine/Makefile.am
-@@ -61,7 +61,7 @@ endif
- noinst_HEADERS	= allocate.h utils.h pengine.h
- #utils.h pengine.h
-
--libpengine_la_LDFLAGS	= -version-info 8:0:4
-+libpengine_la_LDFLAGS	= -version-info 9:0:5
- # -L$(top_builddir)/lib/pils -lpils -export-dynamic -module -avoid-version
- libpengine_la_SOURCES	= pengine.c allocate.c utils.c constraints.c
- libpengine_la_SOURCES  += native.c group.c clone.c master.c graph.c utilization.c
-diff --git a/pengine/allocate.c b/pengine/allocate.c
-index 68cafd4..ec5a18d 100644
---- a/pengine/allocate.c
-+++ b/pengine/allocate.c
-@@ -1962,7 +1962,6 @@ expand_node_list(GListPtr list)
-             if(node_list) {
-                 existing_len = strlen(node_list);
-             }
--
-             crm_trace("Adding %s (%dc) at offset %d", node->details->uname, len - \
                2, existing_len);
-             node_list = realloc_safe(node_list, len + existing_len);
-             sprintf(node_list + existing_len, "%s%s", existing_len == 0 ? "":" ", \
                node->details->uname);
-diff --git a/pengine/allocate.h b/pengine/allocate.h
-index f6602c6..73f750e 100644
---- a/pengine/allocate.h
-+++ b/pengine/allocate.h
-@@ -171,5 +171,6 @@ extern enum pe_graph_flags clone_update_actions(action_t * \
                first, action_t * the
-                                                 enum pe_action_flags filter, enum \
                pe_ordering type);
-
- gboolean update_action_flags(action_t * action, enum pe_action_flags flags);
-+gboolean update_action(action_t * action);
-
- #endif
-diff --git a/pengine/clone.c b/pengine/clone.c
-index 3840a0a..ebf53ed 100644
---- a/pengine/clone.c
-+++ b/pengine/clone.c
-@@ -21,6 +21,7 @@
- #include <crm/msg_xml.h>
- #include <allocate.h>
- #include <utils.h>
-+#include <allocate.h>
-
- #define VARIANT_CLONE 1
- #include <lib/pengine/variant.h>
-@@ -1338,6 +1339,8 @@ clone_update_actions(action_t * first, action_t * then, node_t \
                * node, enum pe_a
-         changed |= native_update_actions(first, then, node, flags, filter, type);
-
-         for (; gIter != NULL; gIter = gIter->next) {
-+            enum pe_graph_flags child_changed = pe_graph_none;
-+            GListPtr lpc = NULL;
-             resource_t *child = (resource_t *) gIter->data;
-             action_t *child_action = find_first_action(child->actions, NULL, \
                then->task, node);
-
-@@ -1345,9 +1348,17 @@ clone_update_actions(action_t * first, action_t * then, \
                node_t * node, enum pe_a
-                 enum pe_action_flags child_flags = \
                child->cmds->action_flags(child_action, node);
-
-                 if (is_set(child_flags, pe_action_runnable)) {
--                    changed |=
-+                    child_changed |=
-                         child->cmds->update_actions(first, child_action, node, flags, filter, type);
-                 }
-+                changed |= child_changed;
-+                if (child_changed & pe_graph_updated_then) {
-+                   for (lpc = child_action->actions_after; lpc != NULL; lpc = \
                lpc->next) {
-+                        action_wrapper_t *other = (action_wrapper_t *) lpc->data;
-+                        update_action(other->action);
-+                    }
-+                }
-             }
-         }
-     }
-diff --git a/pengine/constraints.c b/pengine/constraints.c
-index 1f44811..7527aa6 100644
---- a/pengine/constraints.c
-+++ b/pengine/constraints.c
-@@ -256,7 +256,7 @@ unpack_simple_rsc_order(xmlNode * xml_obj, pe_working_set_t * \
                data_set)
-     resource_t *rsc_then = NULL;
-     resource_t *rsc_first = NULL;
-     gboolean invert_bool = TRUE;
--    gboolean require_all = TRUE;
-+    int min_required_before = 0;
-     enum pe_order_kind kind = pe_order_kind_mandatory;
-     enum pe_ordering cons_weight = pe_order_optional;
-
-@@ -351,7 +351,15 @@ unpack_simple_rsc_order(xmlNode * xml_obj, pe_working_set_t * \
                data_set)
-         && crm_is_true(require_all_s) == FALSE
-         && rsc_first->variant >= pe_clone) {
-
--        require_all = FALSE;
-+        /* require-all=false means only one instance of the clone is required */
-+        min_required_before = 1;
-+    } else if (rsc_first->variant >= pe_clone) {
-+        const char *min_clones_s = g_hash_table_lookup(rsc_first->meta, \
                XML_RSC_ATTR_INCARNATION_MIN);
-+        if (min_clones_s) {
-+            /* if clone-min is set, we require at least that many instances
-+             * to be runnable before allowing dependencies to be runnable. */
-+            min_required_before = crm_parse_int(min_clones_s, "0");
-+        }
-     }
-
-     cons_weight = pe_order_optional;
-@@ -368,22 +376,31 @@ unpack_simple_rsc_order(xmlNode * xml_obj, pe_working_set_t * \
                data_set)
-         cons_weight |= get_flags(id, kind, action_first, action_then, FALSE);
-     }
-
--    if (require_all == FALSE) {
-+    /* If there is a minimum number of instances that must be runnable before
-+     * the 'then' action is runnable, we use a pseudo action as an intermediate step:
-+     * start min number of clones -> pseudo action is runnable -> dependency runnable. */
-+    if (min_required_before) {
-         GListPtr rIter = NULL;
-         char *task = crm_concat(CRM_OP_RELAXED_CLONE, id, ':');
-         action_t *unordered_action = get_pseudo_op(task, data_set);
-         free(task);
-
-+        /* require "min_required_before" of the pseudo action's preceding
-+         * actions to be runnable before allowing the pseudo action itself
-+         * to be considered runnable. */
-+        unordered_action->required_runnable_before = min_required_before;
-         update_action_flags(unordered_action, pe_action_requires_any);
-
-         for (rIter = rsc_first->children; id && rIter; rIter = rIter->next) {
-             resource_t *child = rIter->data;
--
-+            /* order each clone instance before the pseudo action */
-             custom_action_order(child, generate_op_key(child->id, action_first, 0), \
                NULL,
-                                 NULL, NULL, unordered_action,
-                                 pe_order_one_or_more | \
                pe_order_implies_then_printed, data_set);
-         }
-
-+        /* order the "then" dependency to occur after the pseudo action only if
-+         * the pseudo action is runnable */
-         order_id = custom_action_order(NULL, NULL, unordered_action,
-                        rsc_then, generate_op_key(rsc_then->id, action_then, 0), \
                NULL,
-                        cons_weight | pe_order_runnable_left, data_set);
-diff --git a/pengine/graph.c b/pengine/graph.c
-index 9cfede6..3d832f0 100644
---- a/pengine/graph.c
-+++ b/pengine/graph.c
-@@ -29,7 +29,6 @@
- #include <allocate.h>
- #include <utils.h>
-
--gboolean update_action(action_t * action);
- void update_colo_start_chain(action_t * action);
- gboolean rsc_update_action(action_t * first, action_t * then, enum pe_ordering \
                type);
-
-@@ -261,8 +260,16 @@ graph_update_action(action_t * first, action_t * then, node_t * \
                node, enum pe_ac
-                                                 pe_action_runnable, \
                pe_order_one_or_more);
-
-         } else if (is_set(flags, pe_action_runnable)) {
--            if (update_action_flags(then, pe_action_runnable)) {
--                changed |= pe_graph_updated_then;
-+            /* a "first" action is considered runnable, so increment
-+             * the 'runnable_before' counter */
-+            then->runnable_before++;
-+
-+            /* if the runnable-before count for 'then' meets or exceeds the required
-+             * number of runnable "before" actions, mark 'then' as runnable */
-+            if (then->runnable_before >= then->required_runnable_before) {
-+                if (update_action_flags(then, pe_action_runnable)) {
-+                    changed |= pe_graph_updated_then;
-+                }
-             }
-         }
-         if (changed) {
-@@ -456,6 +463,18 @@ update_action(action_t * then)
-                      pe_action_pseudo) ? "pseudo" : then->node ? \
                then->node->details->uname : "");
-
-     if (is_set(then->flags, pe_action_requires_any)) {
-+        /* initialize the count of currently known runnable "before" actions to 0;
-+         * as graph_update_action() is called for each of then's before actions,
-+         * this number will increment as runnable 'first' actions are
-+         * encountered */
-+        then->runnable_before = 0;
-+
-+        /* for backwards compatibility with previous options that use
-+         * the 'requires_any' flag, initialize required to 1 if it is
-+         * not set. */
-+        if (then->required_runnable_before == 0) {
-+            then->required_runnable_before = 1;
-+        }
-         clear_bit(then->flags, pe_action_runnable);
-         /* We are relying on the pe_order_one_or_more clause of
-          * graph_update_action(), called as part of the:
-diff --git a/pengine/native.c b/pengine/native.c
-index b93f8da..7d5f602 100644
---- a/pengi

