
List:       linux-lvm
Subject:    [linux-lvm]  auto_activation_volume_list in lvm.conf not honored
From:       Stefan Bauer <sb@plzk.de>
Date:       2016-12-02 7:07:10
Message-ID: kcim.58411d9e.479a.7acdfc557e9c39cb@ucs.cubewerk.intern

I have now tried setting this up with clvmd, but it fails with:

Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such \
file or directory Internal cluster locking initialisation failed. WARNING: Falling \
back to local file-based locking. Volume Groups with the clustered attribute will be \
inaccessible. Reading all physical volumes. This may take a while... Found volume \
                group "vm1-vg" using metadata type lvm2 Skipping clustered volume \
                group vg
Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such \
file or directory Internal cluster locking initialisation failed. WARNING: Falling \
back to local file-based locking. Volume Groups with the clustered attribute will be \
                inaccessible. Skipping clustered volume group vg
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_start_0 \
                (call=22, rc=1, cib-update=26, confirmed=true) unknown error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: status_from_rc: Action 23 (vg_start_0) on \
                vm1 failed (target: 0 vs. rc: 1): Error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: update_failcount: Updating failcount for vg \
                on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
all hosts for: fail-count-vg (INFINITY)

If I do a cleanup of the resource, it is started.
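
From the timestamps in the attached syslog, vg_start_0 is initiated in the same transition that starts dlm and clvmd, i.e. before clvmd's local socket exists, which would explain the connect() failure (and why a later cleanup succeeds). A minimal crmsh constraint sketch for ordering the three resources (the resource ids dlm, clvmd and vg appear in the log; the clone ids dlm-clone and clvmd-clone are assumptions, adjust to the actual configuration):

```
# crm configure fragment (sketch; clone ids assumed)
order o_dlm_before_clvmd inf: dlm-clone clvmd-clone
order o_clvmd_before_vg inf: clvmd-clone vg
colocation c_vg_with_clvmd inf: vg clvmd-clone
```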

syslog attached.


Any help is greatly appreciated.

Thank you.

Stefan 
-----Original Message-----
> From: David Teigland <teigland@redhat.com>
> Sent: Mon 28 November 2016 23:14
> To: Stefan Bauer <sb@plzk.de>
> CC: linux-lvm@redhat.com
> Subject: Re: [linux-lvm] auto_activation_volume_list in lvm.conf not honored
> 
> On Fri, Nov 25, 2016 at 10:30:39AM +0100, Zdenek Kabelac wrote:
> > On 25.11.2016 at 10:17, Stefan Bauer wrote:
> > > Hi Peter,
> > > 
> > > as I said, we have a master/slave setup _without_ concurrent writes/reads,
> > > so I do not see a reason why I should take care of locking, as only one
> > > node is activating the volume group at any given time.
> > > That should be fine - right?
> > 
> > Nope, it's not.
> > 
> > Every activation, for instance, DOES validate all resources and takes ACTION
> > when something is wrong.
> > 
> > Sorry, but there is NO way to do this properly without a locking manager.
> > 
> > Many lvm2 users try to be 'innovative' and use lvm2 in a lock-less way -
> > this seems to work most of the time - until the moment some disaster
> > happens - and then lvm2 gets blamed for the data loss..
> > 
> > Interestingly, they never stop to think about why we invested so much time
> > in a locking manager when there is such an 'easy fix' in their eyes...
> > 
> > IMHO lvmlockd is a relatively low-resource/low-overhead solution worth
> > exploring if you don't like clvmd...
> 
> Stefan, as Zdenek points out, even reading VGs on shared storage is not
> entirely safe, because lvm may attempt to fix/repair things on disk while
> it is reading (this becomes more likely if one machine reads while another
> is making changes).  Using some kind of locking or clustering (lvmlockd or
> clvm) is a solution.
> 
> Another fairly new option is to use "system ID", which assigns one host as
> the owner of the VG.  This avoids the problems mentioned above with
> reading->fixing.  But, system ID on its own cannot be used dynamically.
> If you want to fail-over the VG between hosts, the system ID needs to be
> changed, and this needs to be done carefully (e.g. by a resource manager
> or something that takes fencing into account,
> https://bugzilla.redhat.com/show_bug.cgi?id=1336346#c2)
> 
> Also https://www.redhat.com/archives/linux-lvm/2016-November/msg00022.html
> 
> Dave
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 
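
For reference, the "system ID" approach Dave describes can be sketched as an lvm.conf fragment; this is my reading of the lvmsystemid man page, not something given in this thread, so treat the details as assumptions:

```
# /etc/lvm/lvm.conf (sketch): each host derives its system ID from its
# hostname, so a VG owned by one host is not auto-activated by the other
# until ownership is explicitly moved.
global {
    system_id_source = "uname"
}
```

Fail-over then means running something like `vgchange --systemid <new-owner> vg`, done carefully (after fencing), as in the bugzilla comment Dave linked.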


["syslog.txt" (text/plain)]

Dec  2 07:09:23 vm1 corosync[1020]:   [TOTEM ] The network interface \
                [192.168.122.175] is now up.
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync \
                configuration map access [0]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: cmap
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync \
                configuration service [1]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: cfg
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync \
                cluster closed process group service v1.01 [2]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: cpg
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync \
                profile loading service [4]
Dec  2 07:09:23 vm1 corosync[1020]:   [WD    ] No Watchdog, try modprobe <a watchdog>
Dec  2 07:09:23 vm1 corosync[1020]:   [WD    ] no resources configured.
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync \
                watchdog service [7]
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] Using quorum provider \
                corosync_votequorum
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] This node is within the primary \
                component and will provide service.
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] Members[0]:
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync vote \
                quorum service v1.0 [5]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: votequorum
Dec  2 07:09:23 vm1 corosync[1020]:   [SERV  ] Service engine loaded: corosync \
                cluster quorum service v0.1 [3]
Dec  2 07:09:23 vm1 corosync[1020]:   [QB    ] server name: quorum
Dec  2 07:09:23 vm1 corosync[1020]:   [TOTEM ] A new membership (192.168.122.175:168) \
                was formed. Members joined: 1084783279
Dec  2 07:09:23 vm1 corosync[1020]:   [QUORUM] Members[1]: 1084783279
Dec  2 07:09:23 vm1 corosync[1020]:   [MAIN  ] Completed service synchronization, \
                ready to provide service.
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: mcp_read_config: Configured corosync \
                to accept connections from group 113: OK (1)
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: main: Starting Pacemaker 1.1.10 \
(Build: 42f2063):  generated-manpages agent-manpages ncurses libqb-logging libqb-ipc \
                lha-fencing upstart nagios  heartbeat corosync-native snmp libesmtp
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: cluster_connect_quorum: Quorum \
                acquired
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: corosync_node_name: Unable to get \
                node name for nodeid 1084783279
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: get_node_name: Defaulting to uname -n \
                for the local corosync node name
Dec  2 07:09:23 vm1 pacemakerd[1062]:   notice: crm_update_peer_state: \
                pcmk_quorum_notification: Node vm1[1084783279] - state is now member \
                (was (null))
Dec  2 07:09:23 vm1 attrd[1073]:   notice: crm_cluster_connect: Connecting to cluster \
                infrastructure: corosync
Dec  2 07:09:23 vm1 stonith-ng[1071]:   notice: crm_cluster_connect: Connecting to \
                cluster infrastructure: corosync
Dec  2 07:09:23 vm1 crmd[1075]:   notice: main: CRM Git Version: 42f2063
Dec  2 07:09:23 vm1 attrd[1073]:   notice: corosync_node_name: Unable to get node \
                name for nodeid 1084783279
Dec  2 07:09:23 vm1 attrd[1073]:   notice: get_node_name: Defaulting to uname -n for \
                the local corosync node name
Dec  2 07:09:23 vm1 attrd[1073]:   notice: main: Starting mainloop...
Dec  2 07:09:23 vm1 stonith-ng[1071]:   notice: corosync_node_name: Unable to get \
                node name for nodeid 1084783279
Dec  2 07:09:23 vm1 stonith-ng[1071]:   notice: get_node_name: Defaulting to uname -n \
                for the local corosync node name
Dec  2 07:09:23 vm1 cib[1070]:   notice: crm_cluster_connect: Connecting to cluster \
                infrastructure: corosync
Dec  2 07:09:23 vm1 cib[1070]:   notice: corosync_node_name: Unable to get node name \
                for nodeid 1084783279
Dec  2 07:09:23 vm1 cib[1070]:   notice: get_node_name: Defaulting to uname -n for \
                the local corosync node name
Dec  2 07:09:24 vm1 crmd[1075]:   notice: crm_cluster_connect: Connecting to cluster \
                infrastructure: corosync
Dec  2 07:09:24 vm1 stonith-ng[1071]:   notice: setup_cib: Watching for stonith \
                topology changes
Dec  2 07:09:24 vm1 stonith-ng[1071]:   notice: unpack_config: On loss of CCM Quorum: \
                Ignore
Dec  2 07:09:24 vm1 crmd[1075]:   notice: corosync_node_name: Unable to get node name \
                for nodeid 1084783279
Dec  2 07:09:24 vm1 crmd[1075]:   notice: get_node_name: Defaulting to uname -n for \
                the local corosync node name
Dec  2 07:09:24 vm1 crmd[1075]:   notice: cluster_connect_quorum: Quorum acquired
Dec  2 07:09:24 vm1 crmd[1075]:   notice: crm_update_peer_state: \
                pcmk_quorum_notification: Node vm1[1084783279] - state is now member \
                (was (null))
Dec  2 07:09:25 vm1 crmd[1075]:   notice: corosync_node_name: Unable to get node name \
                for nodeid 1084783279
Dec  2 07:09:25 vm1 crmd[1075]:   notice: get_node_name: Defaulting to uname -n for \
                the local corosync node name
Dec  2 07:09:25 vm1 crmd[1075]:   notice: do_started: The local CRM is operational
Dec  2 07:09:25 vm1 crmd[1075]:   notice: do_state_transition: State transition \
                S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL \
                origin=do_started ]
Dec  2 07:09:33 vm1 ntpdate[620]: adjust time server 188.166.163.79 offset 0.412702 \
                sec
Dec  2 07:09:33 vm1 ntpd[1131]: ntpd 4.2.6p5@1.2349-o Wed Oct  5 12:35:25 UTC 2016 \
                (1)
Dec  2 07:09:33 vm1 ntpd[1132]: proto: precision = 0.103 usec
Dec  2 07:09:33 vm1 ntpd[1132]: ntp_io: estimated max descriptors: 1024, initial \
                socket boundary: 16
Dec  2 07:09:33 vm1 ntpd[1132]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen and drop on 1 v6wildcard :: UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 2 lo 127.0.0.1 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 3 eth0 192.168.122.175 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 4 lo ::1 UDP 123
Dec  2 07:09:33 vm1 ntpd[1132]: Listen normally on 5 eth0 fe80::5054:ff:fe74:8cc7 UDP \
                123
Dec  2 07:09:33 vm1 ntpd[1132]: peers refreshed
Dec  2 07:09:33 vm1 ntpd[1132]: Listening on routing socket on fd #22 for interface \
                updates
Dec  2 07:09:38 vm1 ntpd[1132]: ntpd exiting on signal 15
Dec  2 07:09:46 vm1 crmd[1075]:  warning: do_log: FSA: Input I_DC_TIMEOUT from \
                crm_timer_popped() received in state S_PENDING
Dec  2 07:09:46 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL \
                origin=do_election_check ]
Dec  2 07:09:46 vm1 cib[1070]:   notice: corosync_node_name: Unable to get node name \
                for nodeid 1084783279
Dec  2 07:09:46 vm1 cib[1070]:   notice: get_node_name: Defaulting to uname -n for \
                the local corosync node name
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_local_callback: Sending full refresh \
                (origin=crmd)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: \
                Ignore
Dec  2 07:09:46 vm1 pengine[1074]:   notice: LogActions: Start   dlm:0#011(vm1)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: LogActions: Start   clvmd:0#011(vm1)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: LogActions: Start   vg#011(vm1)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 4: \
                monitor dlm:0_monitor_0 on vm1 (local)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 5: \
                monitor clvmd:0_monitor_0 on vm1 (local)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 6: \
                monitor vg_monitor_0 on vm1 (local)
Dec  2 07:09:46 vm1 pengine[1074]:   notice: process_pe_message: Calculated \
                Transition 0: /var/lib/pacemaker/pengine/pe-input-141.bz2
Dec  2 07:09:46 vm1 LVM(vg)[1235]: WARNING: LVM Volume vg is not available (stopped)
Dec  2 07:09:46 vm1 LVM(vg)[1235]: INFO: LVM Volume vg is offline
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                clvmd_monitor_0 (call, rc=7, cib-update=23, confirmed=true) not \
                running
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                dlm_monitor_0 (call=6, rc=7, cib-update=24, confirmed=true) not running
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                vg_monitor_0 (call, rc=7, cib-update=25, confirmed=true) not running
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 3: \
                probe_complete probe_complete on vm1 (local) - no waiting
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
                all hosts for: probe_complete (true)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 4: \
                probe_complete=true
Dec  2 07:09:46 vm1 attrd[1073]:   notice: corosync_node_name: Unable to get node \
                name for nodeid 1084783279
Dec  2 07:09:46 vm1 attrd[1073]:   notice: get_node_name: Defaulting to uname -n for \
                the local corosync node name
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 7: start \
                dlm:0_start_0 on vm1 (local)
Dec  2 07:09:46 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 23: start \
                vg_start_0 on vm1 (local)
Dec  2 07:09:46 vm1 dlm_controld[1285]: 25 dlm_controld 4.0.1 started
Dec  2 07:09:46 vm1 LVM(vg)[1278]: INFO: Activating volume group vg
Dec  2 07:09:46 vm1 kernel: [   25.367705] dlm_controld wrote to rmem_default when \
                file position was not 0!
Dec  2 07:09:46 vm1 kernel: [   25.367705] This will not be supported in the future. \
                To silence this
Dec  2 07:09:46 vm1 kernel: [   25.367705] warning, set kernel.sysctl_writes_strict = \
                -1
Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such \
file or directory Internal cluster locking initialisation failed. WARNING: Falling \
back to local file-based locking. Volume Groups with the clustered attribute will be \
inaccessible. Reading all physical volumes. This may take a while... Found volume \
                group "vm1-vg" using metadata type lvm2 Skipping clustered volume \
                group vg
Dec  2 07:09:46 vm1 LVM(vg)[1278]: ERROR: connect() failed on local socket: No such \
file or directory Internal cluster locking initialisation failed. WARNING: Falling \
back to local file-based locking. Volume Groups with the clustered attribute will be \
                inaccessible. Skipping clustered volume group vg
Dec  2 07:09:46 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_start_0 \
                (call=22, rc=1, cib-update=26, confirmed=true) unknown error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: status_from_rc: Action 23 (vg_start_0) on \
                vm1 failed (target: 0 vs. rc: 1): Error
Dec  2 07:09:46 vm1 crmd[1075]:  warning: update_failcount: Updating failcount for vg \
                on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
                all hosts for: fail-count-vg (INFINITY)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 7: \
                fail-count-vg=INFINITY
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
                all hosts for: last-failure-vg (1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 10: \
                last-failure-vg=1480658986
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
                all hosts for: fail-count-vg (INFINITY)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 12: \
                fail-count-vg=INFINITY
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
                all hosts for: last-failure-vg (1480658986)
Dec  2 07:09:46 vm1 attrd[1073]:   notice: attrd_perform_update: Sent update 14: \
                last-failure-vg=1480658986
Dec  2 07:09:46 vm1 crmd[1075]:  warning: update_failcount: Updating failcount for vg \
                on vm1 after failed start: rc=1 (update=INFINITY, time=1480658986)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                dlm_start_0 (call=20, rc=0, cib-update=27, confirmed=true) ok
Dec  2 07:09:47 vm1 crmd[1075]:   notice: run_graph: Transition 0 (Complete=9, \
Pending=0, Fired=0, Skipped=5, Incomplete=1, \
                Source=/var/lib/pacemaker/pengine/pe-input-141.bz2): Stopped
Dec  2 07:09:47 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: \
                Ignore
Dec  2 07:09:47 vm1 pengine[1074]:  warning: unpack_rsc_op: Processing failed op \
                start for vg on vm1: unknown error (1)
Dec  2 07:09:47 vm1 pengine[1074]:  warning: common_apply_stickiness: Forcing vg away \
                from vm1 after 1000000 failures (max=1000000)
Dec  2 07:09:47 vm1 pengine[1074]:   notice: LogActions: Start   clvmd:0#011(vm1)
Dec  2 07:09:47 vm1 pengine[1074]:   notice: LogActions: Stop    vg#011(vm1)
Dec  2 07:09:47 vm1 pengine[1074]:   notice: process_pe_message: Calculated \
                Transition 1: /var/lib/pacemaker/pengine/pe-input-142.bz2
Dec  2 07:09:47 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 7: \
                monitor dlm_monitor_60000 on vm1 (local)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 1: stop \
                vg_stop_0 on vm1 (local)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 8: start \
                clvmd_start_0 on vm1 (local)
Dec  2 07:09:47 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                dlm_monitor_60000 (call=26, rc=0, cib-update=29, confirmed=false) ok
Dec  2 07:09:47 vm1 clvmd(clvmd)[1309]: INFO: Starting clvmd
Dec  2 07:09:47 vm1 clvmd[1332]: CLVMD started
Dec  2 07:09:47 vm1 kernel: [   26.403628] dlm: Using SCTP for communications
Dec  2 07:09:47 vm1 kernel: [   26.413789] sctp: Hash tables configured (established \
                256 bind 256)
Dec  2 07:09:47 vm1 kernel: [   26.415925] dlm: clvmd: joining the lockspace group...
Dec  2 07:09:47 vm1 kernel: [   26.418602] dlm: clvmd: group event done 0 0
Dec  2 07:09:47 vm1 kernel: [   26.419054] dlm: clvmd: dlm_recover 1
Dec  2 07:09:47 vm1 kernel: [   26.419065] dlm: clvmd: add member 1084783279
Dec  2 07:09:47 vm1 kernel: [   26.419067] dlm: clvmd: dlm_recover_members 1 nodes
Dec  2 07:09:47 vm1 kernel: [   26.419068] dlm: clvmd: join complete
Dec  2 07:09:47 vm1 kernel: [   26.419557] dlm: clvmd: generation 1 slots 1 \
                1:1084783279
Dec  2 07:09:47 vm1 kernel: [   26.419558] dlm: clvmd: dlm_recover_directory
Dec  2 07:09:47 vm1 kernel: [   26.419559] dlm: clvmd: dlm_recover_directory 0 in 0 \
                new
Dec  2 07:09:47 vm1 kernel: [   26.419560] dlm: clvmd: dlm_recover_directory 0 out 0 \
                messages
Dec  2 07:09:47 vm1 kernel: [   26.419566] dlm: clvmd: dlm_recover 1 generation 1 \
                done: 0 ms
Dec  2 07:09:47 vm1 LVM(vg)[1308]: INFO: Deactivating volume group vg
Dec  2 07:09:48 vm1 clvmd[1332]: Created DLM lockspace for CLVMD.
Dec  2 07:09:48 vm1 clvmd[1332]: DLM initialisation complete
Dec  2 07:09:48 vm1 clvmd[1332]: Our local node id is 1084783279
Dec  2 07:09:48 vm1 clvmd[1332]: Connected to Corosync
Dec  2 07:09:48 vm1 clvmd[1332]: Cluster LVM daemon started - connected to Corosync
Dec  2 07:09:48 vm1 clvmd[1332]: Cluster ready, doing some more initialisation
Dec  2 07:09:48 vm1 clvmd[1332]: starting LVM thread
Dec  2 07:09:48 vm1 clvmd[1332]: LVM thread function started
Dec  2 07:09:48 vm1 lvm[1332]: clvmd ready for work
Dec  2 07:09:48 vm1 lvm[1332]: Using timeout of 60 seconds
Dec  2 07:09:48 vm1 lvm[1332]: confchg callback. 1 joined, 0 left, 1 members
Dec  2 07:09:48 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:09:48 vm1 lvm[1332]: creating pipe, [13, 14]
Dec  2 07:09:48 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:09:48 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:09:48 vm1 lvm[1332]: Sub thread ready for work.
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 \
                (client=0x8517f0)
Dec  2 07:09:48 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:09:48 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 0, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x8517f0, \
                msg=0x851900, len=24, csid=(nil), xid=0
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851ba0, \
                msglen =24, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:09:48 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 1, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b90. client=0x8517f0, \
                msg=0x851b60, len=31, csid=(nil), xid=1
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851bd0, \
                msglen =31, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Syncing device names
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 84
Dec  2 07:09:48 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:09:48 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 2, flags=0x0 ()
Dec  2 07:09:48 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851e50. client=0x8517f0, \
                msg=0x851b90, len=84, csid=(nil), xid=2
Dec  2 07:09:48 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x851bf0, \
                msglen =84, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_query: resource \
                'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', \
                mode -1 (?)
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 84
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 84
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 3, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851bf0. client=0x8517f0, \
                msg=0x851b90, len=84, csid=(nil), xid=3
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_LV (0x32) msg=0x851e50, \
                msglen =84, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_lv: resource \
'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', cmd = 0x98 \
LCK_LV_DEACTIVATE (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x0 ( ), critical_section = \
                0
Dec  2 07:09:48 vm1 lvm[1332]: do_deactivate_lock, lock not already held
Dec  2 07:09:48 vm1 lvm[1332]: Command return is 0, critical_section is 0
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:09:48 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:09:48 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 4, flags=0x0 ()
Dec  2 07:09:48 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851df0. client=0x8517f0, \
                msg=0x851b60, len=31, csid=(nil), xid=4
Dec  2 07:09:48 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851b90, \
                msglen =31, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Syncing device names
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 \
                (client=0x8517f0)
Dec  2 07:09:48 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:09:48 vm1 lvm[1332]: Writing status 0 down pipe 14
Dec  2 07:09:48 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: distribute command: XID = 5, flags=0x1 (LOCAL)
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x8517f0, \
                msg=0x851900, len=24, csid=(nil), xid=5
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: local
Dec  2 07:09:48 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851ba0, \
                msglen =24, client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:09:48 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:09:48 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:09:48 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 lvm[1332]: Got post command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:09:48 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:09:48 vm1 lvm[1332]: read on PIPE 13: 4 bytes: status: 0
Dec  2 07:09:48 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:09:48 vm1 lvm[1332]: Send local reply
Dec  2 07:09:48 vm1 LVM(vg)[1308]: INFO: 0 logical volume(s) in volume group "vg" now \
                active
Dec  2 07:09:48 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:09:48 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:09:48 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:09:48 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:09:48 vm1 lvm[1332]: Subthread finished
Dec  2 07:09:48 vm1 lvm[1332]: Joined child thread
Dec  2 07:09:48 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:09:48 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, \
                msg=(nil), len=0, csid=(nil), xid=5
Dec  2 07:09:48 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:09:48 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:09:48 vm1 LVM(vg)[1308]: INFO: LVM Volume vg is not available (stopped)
Dec  2 07:09:48 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_stop_0 \
                (call=28, rc=0, cib-update=30, confirmed=true) ok
Dec  2 07:09:48 vm1 ntpdate[1229]: adjust time server 37.120.191.245 offset 0.411105 \
                sec
Dec  2 07:09:48 vm1 ntpd[1388]: ntpd 4.2.6p5@1.2349-o Wed Oct  5 12:35:25 UTC 2016 \
                (1)
Dec  2 07:09:48 vm1 ntpd[1389]: proto: precision = 0.134 usec
Dec  2 07:09:48 vm1 ntpd[1389]: ntp_io: estimated max descriptors: 1024, initial \
                socket boundary: 16
Dec  2 07:09:48 vm1 ntpd[1389]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen and drop on 1 v6wildcard :: UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 2 lo 127.0.0.1 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 3 eth0 192.168.122.175 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 4 lo ::1 UDP 123
Dec  2 07:09:48 vm1 ntpd[1389]: Listen normally on 5 eth0 fe80::5054:ff:fe74:8cc7 UDP \
                123
Dec  2 07:09:48 vm1 ntpd[1389]: peers refreshed
Dec  2 07:09:48 vm1 ntpd[1389]: Listening on routing socket on fd #22 for interface \
                updates
Dec  2 07:09:50 vm1 lrmd[1072]:   notice: operation_finished: \
clvmd_start_0:1309:stderr [   local socket: connect failed: No such file or directory \
                ]
Dec  2 07:09:50 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                clvmd_start_0 (call=30, rc=0, cib-update=31, confirmed=true) ok
Dec  2 07:09:50 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 9: \
                monitor clvmd_monitor_40000 on vm1 (local)
Dec  2 07:09:50 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                clvmd_monitor_40000 (call=35, rc=0, cib-update=32, confirmed=false) ok
Dec  2 07:09:50 vm1 crmd[1075]:   notice: run_graph: Transition 1 (Complete=9, \
Pending=0, Fired=0, Skipped=0, Incomplete=0, \
                Source=/var/lib/pacemaker/pengine/pe-input-142.bz2): Complete
Dec  2 07:09:50 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL \
                origin=notify_crmd ]
Dec  2 07:17:01 vm1 CRON[1566]: (root) CMD (   cd / && run-parts --report \
                /etc/cron.hourly)
Dec  2 07:24:50 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED \
                origin=crm_timer_popped ]
Dec  2 07:24:50 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: \
                Ignore
Dec  2 07:24:50 vm1 pengine[1074]:  warning: unpack_rsc_op: Processing failed op \
                start for vg on vm1: unknown error (1)
Dec  2 07:24:50 vm1 pengine[1074]:  warning: common_apply_stickiness: Forcing vg away \
                from vm1 after 1000000 failures (max=1000000)
Dec  2 07:24:50 vm1 crmd[1075]:   notice: run_graph: Transition 2 (Complete=0, \
Pending=0, Fired=0, Skipped=0, Incomplete=0, \
                Source=/var/lib/pacemaker/pengine/pe-input-143.bz2): Complete
Dec  2 07:24:50 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL \
                origin=notify_crmd ]
Dec  2 07:24:50 vm1 pengine[1074]:   notice: process_pe_message: Calculated \
                Transition 2: /var/lib/pacemaker/pengine/pe-input-143.bz2
Dec  2 07:33:57 vm1 dhclient: DHCPREQUEST of 192.168.122.175 on eth0 to 192.168.122.1 \
                port 67 (xid=0x1068268f)
Dec  2 07:33:57 vm1 dhclient: DHCPACK of 192.168.122.175 from 192.168.122.1
Dec  2 07:33:57 vm1 dhclient: bound to 192.168.122.175 -- renewal in 1616 seconds.
Dec  2 07:39:50 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED \
                origin=crm_timer_popped ]
Dec  2 07:39:50 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: \
                Ignore
Dec  2 07:39:50 vm1 pengine[1074]:  warning: unpack_rsc_op: Processing failed op \
                start for vg on vm1: unknown error (1)
Dec  2 07:39:50 vm1 pengine[1074]:  warning: common_apply_stickiness: Forcing vg away \
                from vm1 after 1000000 failures (max=1000000)
Dec  2 07:39:50 vm1 crmd[1075]:   notice: run_graph: Transition 3 (Complete=0, \
Pending=0, Fired=0, Skipped=0, Incomplete=0, \
                Source=/var/lib/pacemaker/pengine/pe-input-143.bz2): Complete
Dec  2 07:39:50 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL \
                origin=notify_crmd ]
Dec  2 07:39:50 vm1 pengine[1074]:   notice: process_pe_message: Calculated \
                Transition 3: /var/lib/pacemaker/pengine/pe-input-143.bz2
Dec  2 07:43:08 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:08 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:08 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:08 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:08 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 \
                (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 6, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=28, csid=(nil), xid=6
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, \
                msglen =28, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 7, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=7
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, \
                msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 8, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=8
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, \
                msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:08 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 9, flags=0x0 ()
Dec  2 07:43:08 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=9
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, \
                msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 \
                (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 10, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=28, csid=(nil), xid=10
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, \
                msglen =28, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 \
                (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:08 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 11, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, \
                msg=0x851930, len=24, csid=(nil), xid=11
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, \
                msglen =24, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 12, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=12
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, \
                msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:08 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:08 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 13, flags=0x0 ()
Dec  2 07:43:08 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=13
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, \
                msglen =31, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 \
                (client=0x8517f0)
Dec  2 07:43:08 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:43:08 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:08 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: distribute command: XID = 14, flags=0x1 (LOCAL)
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, \
                msg=0x851930, len=24, csid=(nil), xid=14
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:08 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, \
                msglen =24, client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:08 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:08 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:08 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:08 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:08 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:08 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:08 vm1 lvm[1332]: Send local reply
Dec  2 07:43:08 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:08 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:08 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:08 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:08 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:08 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:08 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:08 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, \
                msg=(nil), len=0, csid=(nil), xid=14
Dec  2 07:43:08 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:08 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:09 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:09 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:09 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 1 \
                (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource 'P_#global', flags=0, mode=3
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 15, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=29, csid=(nil), xid=15
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, \
                msglen =29, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 29
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 \
                (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 16, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=28, csid=(nil), xid=16
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, \
                msglen =28, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 17, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=17
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, \
                msglen =31, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 \
                (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 18, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=28, csid=(nil), xid=18
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, \
                msglen =28, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 \
                (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:09 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 19, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, \
                msg=0x851930, len=24, csid=(nil), xid=19
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, \
                msglen =24, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 20, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=20
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, \
                msglen =31, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 \
                (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: unlock_resource: V_vg lockid: 2
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 21, flags=0x1 (LOCAL)
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len$, csid=(nil), xid!
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen $, client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:09 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:09 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x8517f0)
Dec  2 07:43:09 vm1 lvm[1332]: unlock_resource: P_#global lockid: 1
Dec  2 07:43:09 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:09 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: distribute command: XID = 22, flags=0x0 ()
Dec  2 07:43:09 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len), csid=(nil), xid"
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:09 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen ), client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:09 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:09 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:09 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:43:09 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:09 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:09 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:09 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:09 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:09 vm1 lvm[1332]: Send local reply
Dec  2 07:43:09 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:09 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:09 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:09 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:09 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:09 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:09 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:09 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid"
Dec  2 07:43:09 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:09 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:16 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:16 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:16 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:16 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 23, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len(, csid=(nil), xid#
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen (, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 24, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid$
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen 1, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 25, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid%
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen 1, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:16 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 26, flags=0x0 ()
Dec  2 07:43:16 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid&
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen 1, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 27, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len(, csid=(nil), xid'
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen (, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:16 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 28, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len$, csid=(nil), xid(
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen $, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 29, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid)
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851970, msglen 1, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:16 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:16 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 30, flags=0x0 ()
Dec  2 07:43:16 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid0
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen 1, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:43:16 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:43:16 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:16 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: distribute command: XID = 31, flags=0x1 (LOCAL)
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len$, csid=(nil), xid1
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:16 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen $, client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:16 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:16 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:16 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:16 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:16 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:16 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:16 vm1 lvm[1332]: Send local reply
Dec  2 07:43:16 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:16 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:16 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:16 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:16 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:16 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:16 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:16 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, msg=(nil), len=0, csid=(nil), xid1
Dec  2 07:43:16 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:16 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:43:24 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:43:24 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:43:24 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 1 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource 'P_#global', flags=0, mode=3
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 32, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len), csid=(nil), xid2
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, msglen ), client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 29
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 33, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len(, csid=(nil), xid3
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen (, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 34, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid4
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen 1, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 35, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, msg=0x851900, len(, csid=(nil), xid5
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, msglen (, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:43:24 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 36, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len$, csid=(nil), xid6
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen $, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 37, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, msg=0x851900, len1, csid=(nil), xid7
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, msglen 1, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Syncing device names
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. len 31
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: unlock_resource: V_vg lockid: 2
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 38, flags=0x1 (LOCAL)
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, msg=0x851930, len$, csid=(nil), xid8
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, msglen $, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:43:24 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:24 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:24 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:24 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:24 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: Send local reply
Dec  2 07:43:24 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:43:24 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:43:24 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:43:24 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:24 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 6 \
                (client=0x8517f0)
Dec  2 07:43:24 vm1 lvm[1332]: unlock_resource: P_#global lockid: 1
Dec  2 07:43:24 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:43:24 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:43:24 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:24 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: distribute command: XID = 39, flags=0x0 ()
Dec  2 07:43:24 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:43:24 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=29, csid=(nil), xid=39
Dec  2 07:43:24 vm1 lvm[1332]: process_work_item: local
Dec  2 07:43:24 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, \
                msglen =29, client=0x8517f0
Dec  2 07:43:24 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:43:24 vm1 lvm[1332]: Refreshing context
Dec  2 07:43:24 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:43:24 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 29
Dec  2 07:43:25 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:43:25 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:43:25 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:43:25 vm1 lvm[1332]: Got post command condition...
Dec  2 07:43:25 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:43:25 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:43:25 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:43:25 vm1 lvm[1332]: Send local reply
Dec  2 07:43:25 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:43:25 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:43:25 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:43:25 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:43:25 vm1 lvm[1332]: Subthread finished
Dec  2 07:43:25 vm1 lvm[1332]: Joined child thread
Dec  2 07:43:25 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:43:25 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851900. client=0x8517f0, \
                msg=(nil), len=0, csid=(nil), xid=39
Dec  2 07:43:25 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:43:25 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:40 vm1 attrd[1073]:   notice: attrd_trigger_update: Sending flush op to \
                all hosts for: fail-count-vg (<null>)
Dec  2 07:44:40 vm1 crmd[1075]:   notice: do_state_transition: State transition \
S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL \
                origin=abort_transition_graph ]
Dec  2 07:44:40 vm1 attrd[1073]:   notice: attrd_perform_update: Sent delete 16: \
                node=1084783279, attr=fail-count-vg, id=<n/a>, set=(null), section=status
Dec  2 07:44:40 vm1 cib[1070]:   notice: cib:diff: Diff: --- 0.109.15
Dec  2 07:44:40 vm1 cib[1070]:   notice: cib:diff: Diff: +++ 0.110.1 \
                ca82fa1d5517ed5d58fe06dce9b30d67
Dec  2 07:44:40 vm1 pengine[1074]:   notice: unpack_config: On loss of CCM Quorum: \
                Ignore
Dec  2 07:44:40 vm1 pengine[1074]:   notice: LogActions: Start   vg#011(vm1)
Dec  2 07:44:40 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 6: \
                monitor vg_monitor_0 on vm1 (local)
Dec  2 07:44:40 vm1 pengine[1074]:   notice: process_pe_message: Calculated \
                Transition 4: /var/lib/pacemaker/pengine/pe-input-144.bz2
Dec  2 07:44:40 vm1 LVM(vg)[2032]: WARNING: LVM Volume vg is not available (stopped)
Dec  2 07:44:40 vm1 LVM(vg)[2032]: INFO: LVM Volume vg is offline
Dec  2 07:44:40 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation \
                vg_monitor_0 (call=42, rc=7, cib-update=42, confirmed=true) not running
Dec  2 07:44:40 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 5: \
                probe_complete probe_complete on vm1 (local) - no waiting
Dec  2 07:44:40 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 23: start \
                vg_start_0 on vm1 (local)
Dec  2 07:44:40 vm1 LVM(vg)[2047]: INFO: Activating volume group vg
Dec  2 07:44:40 vm1 lvm[1332]: Got new connection on fd 5
Dec  2 07:44:40 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:44:40 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:40 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:40 vm1 lvm[1332]: creating pipe, [12, 13]
Dec  2 07:44:40 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:44:40 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:44:40 vm1 lvm[1332]: in sub thread: client = 0x8517f0
Dec  2 07:44:40 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 4 \
                (client=0x8517f0)
Dec  2 07:44:40 vm1 lvm[1332]: lock_resource 'P_#global', flags=0, mode=4
Dec  2 07:44:40 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:44:40 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:40 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:40 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:40 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:40 vm1 lvm[1332]: distribute command: XID = 40, flags=0x0 ()
Dec  2 07:44:40 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:40 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=29, csid=(nil), xid=40
Dec  2 07:44:40 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:40 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, \
                msglen =29, client=0x8517f0
Dec  2 07:44:40 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG \
                (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:40 vm1 lvm[1332]: Refreshing context
Dec  2 07:44:40 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:40 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 29
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 1 \
                (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'V_vm1-vg', flags=0, mode=3
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 41, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=28, csid=(nil), xid=41
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, \
                msglen =28, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 42, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=42
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, \
                msglen =31, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Syncing device names
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 28
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vm1-vg' at 6 \
                (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: V_vm1-vg lockid: 2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 43, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851930. client=0x8517f0, \
                msg=0x851900, len=28, csid=(nil), xid=43
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851970, \
                msglen =28, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vm1-vg', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vm1-vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 \
                (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 44, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, \
                msg=0x851930, len=24, csid=(nil), xid=44
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, \
                msglen =24, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 31
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 45, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=31, csid=(nil), xid=45
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x851930, \
                msglen =31, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Syncing device names
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 \
                (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: V_vg lockid: 2
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 46, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, \
                msg=0x851930, len=24, csid=(nil), xid=46
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, \
                msglen =24, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 29
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'P_#global' at 6 \
                (client=0x8517f0)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: P_#global lockid: 1
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 13
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 47, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851ed0. client=0x8517f0, \
                msg=0x851900, len=29, csid=(nil), xid=47
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851930, \
                msglen =29, client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG \
                (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Refreshing context
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 29
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 12: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x8517f0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 LVM(vg)[2047]: INFO: Reading all physical volumes. This may take \
a while... Found volume group "vm1-vg" using metadata type lvm2 Found volume group \
                "vg" using metadata type lvm2
Dec  2 07:44:41 vm1 lvm[1332]: Got new connection on fd 15
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: creating pipe, [16, 17]
Dec  2 07:44:41 vm1 lvm[1332]: Creating pre&post thread
Dec  2 07:44:41 vm1 lvm[1332]: Created pre&post thread, state = 0
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 5, len = 0
Dec  2 07:44:41 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Subthread finished
Dec  2 07:44:41 vm1 lvm[1332]: Joined child thread
Dec  2 07:44:41 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x8517f0, \
                msg=(nil), len=0, csid=(nil), xid=47
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: in sub thread: client = 0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 1 \
                (client=0x851c70)
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource 'V_vg', flags=0, mode=3
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=1
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 48, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851950. client=0x851c70, \
                msg=0x851930, len=24, csid=(nil), xid=48
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851990, \
                msglen =24, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x1 LCK_VG \
                (READ|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 49, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, \
                msg=0x851950, len=84, csid=(nil), xid=49
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x8519b0, \
                msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_query: resource \
                'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', \
                mode -1 (?)
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 84
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 50, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, \
                msg=0x851950, len=84, csid=(nil), xid=50
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x8519b0, \
                msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_query: resource \
                'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', \
                mode -1 (?)
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 84
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 51, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x8519b0. client=0x851c70, \
                msg=0x851950, len=84, csid=(nil), xid=51
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_LV (0x32) msg=0x852220, \
                msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_lv: resource \
'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', cmd = 0x9d \
                LCK_LV_EXCLUSIVE (EXCL|LV|NONBLOCK|CLUSTER_VG), flags = 0x0 ( ), \
                critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource \
                'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', \
                flags=1, mode=5
Dec  2 07:44:41 vm1 lvm[1332]: lock_resource returning 0, lock_id=2
Dec  2 07:44:41 vm1 lvm[1332]: Command return is 0, critical_section is 0
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 84
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 52, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, \
                msg=0x851950, len=84, csid=(nil), xid=52
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_QUERY (0x34) msg=0x8519b0, \
                msglen =84, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_query: resource \
                'Y9s0qu8doBG5RzmdlanzhUeWenRLq30OwuDvJhYLEnPWcqlIqnearqNGu0l0OcNi', \
                mode 5 (EX)
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 3 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 84
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 31
Dec  2 07:44:41 vm1 lvm[1332]: check_all_clvmds_running
Dec  2 07:44:41 vm1 lvm[1332]: down_callback. node 1084783279, state = 3
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 53, flags=0x0 ()
Dec  2 07:44:41 vm1 lvm[1332]: num_nodes = 1
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x852220. client=0x851c70, \
                msg=0x8519b0, len=31, csid=(nil), xid=53
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: SYNC_NAMES (0x2d) msg=0x8519e0, \
                msglen =31, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Syncing device names
Dec  2 07:44:41 vm1 lvm[1332]: Sending message to all cluster nodes
Dec  2 07:44:41 vm1 lvm[1332]: 1084783279 got message from nodeid 1084783279 for 0. \
                len 31
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 24
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: doing PRE command LOCK_VG 'V_vg' at 6 \
                (client=0x851c70)
Dec  2 07:44:41 vm1 lvm[1332]: unlock_resource: V_vg lockid: 1
Dec  2 07:44:41 vm1 lvm[1332]: Writing status 0 down pipe 17
Dec  2 07:44:41 vm1 lvm[1332]: Waiting to do post command - state = 0
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: distribute command: XID = 54, flags=0x1 (LOCAL)
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x851c70, \
                msg=0x84f870, len=24, csid=(nil), xid=54
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: local
Dec  2 07:44:41 vm1 lvm[1332]: process_local_command: LOCK_VG (0x33) msg=0x851ba0, msglen =24, client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: do_lock_vg: resource 'V_vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Dec  2 07:44:41 vm1 lvm[1332]: Invalidating cached metadata for VG vg
Dec  2 07:44:41 vm1 lvm[1332]: Reply from node 40a87aaf: 0 bytes
Dec  2 07:44:41 vm1 lvm[1332]: Got 1 replies, expecting: 1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 lvm[1332]: Got post command condition...
Dec  2 07:44:41 vm1 lvm[1332]: read on PIPE 16: 4 bytes: status: 0
Dec  2 07:44:41 vm1 lvm[1332]: background routine status was 0, sock_client=0x851c70
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for next pre command
Dec  2 07:44:41 vm1 lvm[1332]: Send local reply
Dec  2 07:44:41 vm1 LVM(vg)[2047]: INFO: 1 logical volume(s) in volume group "vg" now active
Dec  2 07:44:41 vm1 lvm[1332]: Read on local socket 15, len = 0
Dec  2 07:44:41 vm1 lvm[1332]: EOF on local socket: inprogress=0
Dec  2 07:44:41 vm1 lvm[1332]: Waiting for child thread
Dec  2 07:44:41 vm1 lvm[1332]: Got pre command condition...
Dec  2 07:44:41 vm1 lvm[1332]: Subthread finished
Dec  2 07:44:41 vm1 lvm[1332]: Joined child thread
Dec  2 07:44:41 vm1 lvm[1332]: ret == 0, errno = 0. removing client
Dec  2 07:44:41 vm1 lvm[1332]: add_to_lvmqueue: cmd=0x851b60. client=0x851c70, msg=(nil), len=0, csid=(nil), xid=54
Dec  2 07:44:41 vm1 lvm[1332]: process_work_item: free fd -1
Dec  2 07:44:41 vm1 lvm[1332]: LVM thread waiting for work
Dec  2 07:44:41 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_start_0 (call=45, rc=0, cib-update=43, confirmed=true) ok
Dec  2 07:44:41 vm1 crmd[1075]:   notice: te_rsc_command: Initiating action 24: monitor vg_monitor_60000 on vm1 (local)
Dec  2 07:44:41 vm1 crmd[1075]:   notice: process_lrm_event: LRM operation vg_monitor_60000 (call=48, rc=0, cib-update=44, confirmed=false) ok
Dec  2 07:44:41 vm1 crmd[1075]:   notice: run_graph: Transition 4 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-144.bz2): Complete
Dec  2 07:44:41 vm1 crmd[1075]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
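Aside from the raw log: in case it helps anyone triaging similar output, here is a small shell sketch (my own, and only tested against the line format shown above; other pacemaker versions may log differently) that pulls the crmd LRM operation results out of a syslog excerpt so failed starts stand out:

```shell
# lrm_summary: print "resource call=N rc=N result" for every
# process_lrm_event line in the given syslog file.
# Assumes the crmd line format seen in this log:
#   ... LRM operation vg_start_0 (call=45, rc=0, cib-update=43, confirmed=true) ok
lrm_summary() {
    sed -n 's/.*LRM operation \([A-Za-z0-9_]*\) (call=\([0-9]*\), rc=\([0-9]*\),.*) \(.*\)$/\1 call=\2 rc=\3 \4/p' "$1"
}
```

Running `lrm_summary /var/log/syslog` on the earlier failure would show something like `vg_start_0 call=22 rc=1 unknown error` next to the later `rc=0` lines.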



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

