
List:       spread-users
Subject:    Re: [Spread-users] Stuck Spread Daemons
From:       John Schultz <jschultz@spreadconcepts.com>
Date:       2011-11-17 18:55:43
Message-ID: 1CF97FCB-E96B-4DAE-A514-351DCE6DE8BA@spreadconcepts.com



Alternatively, if you have the DangerousMonitor option enabled, then you could use
spmonitor to create a partition that puts each surviving daemon in its own
partition.  That will allow all the daemons to "accept" the loss of the packet and
"move on with their lives."  But this would require manual intervention each time
the condition reared its head again.
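
For reference, that option is a one-line global setting in spread.conf.  A minimal
sketch, assuming the stock option name and leaving the rest of your configuration
as it is:

    # spread.conf -- global options, above the Spread_Segment definitions
    DangerousMonitor = true

As far as I recall spmonitor's partition commands are refused unless this is set,
and you would want to switch it back off once things are healthy again.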

Another thing to try would be to put each daemon in its own segment.  It may be
that your broadcast address isn't working properly but point-to-point is fine.  If
that is the case, then putting every daemon in its own segment might allow them to
function properly, although likely with more communication overhead.
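
A hedged sketch of what that could look like in spread.conf, using each daemon's
own IP as its segment address (seg2a1 is taken from your logs; the other names are
placeholders, and your remaining segment-1 daemons would get blocks of their own in
the same way):

    Spread_Segment 192.168.0.203:4803 {
        seg1a   192.168.0.203
    }

    Spread_Segment 192.168.0.139:4803 {
        seg1b   192.168.0.139
    }

    Spread_Segment 192.168.0.1:4803 {
        seg2a1  192.168.0.1
    }

With no daemon sharing a segment there is nothing left to broadcast to, so all
inter-daemon traffic should then go point-to-point.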

Cheers!

-----
John Lane Schultz
Spread Concepts LLC
Phn: 301 830 8100
Cell: 443 838 2200

On Nov 17, 2011, at 1:47 PM, John Schultz wrote:

> If I understand correctly the token would rotate (for example) A->B->C->D based
> on the order in the spread configuration file.  If B was missing a packet it
> would add that to the token and pass it to C.  C would then send it back to B, D
> would have no idea this was happening.

Right.  C would try to retransmit the packet that B is missing and then remove the
retransmission request from the token, so when D (and later daemons in the ring)
gets the token it has no clue that anything was wrong for B in the first place.
Since there is a 0% chance of C successfully recovering the packet for B, this
condition will persist in a tight loop and eventually flow control will not allow
the rest of the system to make forward progress.  It will appear frozen from a
client's perspective even while the token circulates and retransmissions continue
at a frantic, but pointless, pace.
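
Just to make the failure mode concrete, here is a toy model of that loop (plain C,
not the daemon's code; the daemon letters and packet number are made up):

    /* Toy model of the retransmission loop described above: B is missing
     * packet 42, C always "retransmits" it but the C->B link drops it, and
     * D only ever sees a clean token. */
    #include <stdio.h>
    #include <stdbool.h>

    int main(void) {
        bool b_has_packet = false;   /* B never received packet 42          */
        int  rtr_on_token = -1;      /* retransmission request on the token */

        for (int round = 1; round <= 5; ++round) {  /* a few token rotations */
            if (!b_has_packet)                 /* B: still missing, re-request */
                rtr_on_token = 42;

            if (rtr_on_token == 42) {          /* C: retransmit and clear it   */
                bool delivered_to_b = false;   /* ...but the C->B path is dead */
                if (delivered_to_b)
                    b_has_packet = true;
                rtr_on_token = -1;
            }

            /* D: token looks clean, D has no idea B is still stuck */
            printf("round %d: rtr on token = %d, B has packet: %s\n",
                   round, rtr_on_token, b_has_packet ? "yes" : "no");
        }
        return 0;
    }

The request gets cleared every rotation and re-added every rotation, which is the
tight loop that eventually stalls flow control.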

> You say segment or point-to-point, does that mean a broadcast may be used for
> the retransmission rather than a UDP packet?

Yes.  A segment "broadcast" retransmission can occur.  I believe the logic is: if
more than one daemon marks a packet as missed on the token before it is
retransmitted, then the retransmission goes to the segment.  If only a single
daemon has marked a packet as missed, then the retransmission will be unicast, I
believe.
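
In other words, roughly this rule (a sketch of the logic as I described it, not
the actual retransmit code):

    /* Sketch: pick the retransmission destination based on how many daemons
     * had marked the packet as missed on the token. */
    #include <stdio.h>

    static const char *choose_destination(int num_requesters) {
        return (num_requesters > 1) ? "segment broadcast"
                                    : "unicast to the one requester";
    }

    int main(void) {
        printf("1 requester  -> %s\n", choose_destination(1));
        printf("3 requesters -> %s\n", choose_destination(3));
        return 0;
    }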

> In the meantime is there anything I can do to stop it escalating as it does?  A
> log and drop threshold would be useful; although lost messages aren't great, I'd
> rather have a few dropped than everything jam.

I don't believe there is any easy fix, as this goes against the core purpose and
semantics of Spread.  The simplest thing I could see you doing would be to have a
daemon remember how many times it has requested retransmission of a packet and, if
it has asked too many times, just stop requesting and ACK it instead.  Even that
would probably lead to all sorts of system invariants being violated and isn't as
simple as it sounds.
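
If you did want to experiment with that anyway, the shape of it would be something
like the toy sketch below.  None of these names exist in Spread; it deliberately
pretends a lost packet was received, which is exactly the semantics violation I
mean:

    /* Toy sketch of "give up after N retransmission requests and ACK anyway". */
    #include <stdio.h>

    #define MAX_RETRANS_REQUESTS 3        /* tiny limit, just for the demo */

    struct missing_packet {
        int seq;            /* sequence number we never received            */
        int request_count;  /* how many times we've asked for it on a token */
    };

    static void request_retrans(int seq) { printf("requesting rtr of %d\n", seq); }
    static void fake_ack(int seq)        { printf("giving up, ACKing %d\n", seq); }

    /* Called each time the token arrives while the packet is still missing. */
    static void handle_missing_on_token(struct missing_packet *mp) {
        if (mp->request_count < MAX_RETRANS_REQUESTS) {
            mp->request_count++;
            request_retrans(mp->seq);
        } else {
            fake_ack(mp->seq);  /* stop jamming the ring, accept the loss */
        }
    }

    int main(void) {
        struct missing_packet mp = { 42, 0 };
        for (int i = 0; i < 5; ++i)       /* five token rotations */
            handle_missing_on_token(&mp);
        return 0;
    }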

Cheers!

-----
John Lane Schultz
Spread Concepts LLC
Phn: 301 830 8100
Cell: 443 838 2200

On Nov 17, 2011, at 1:24 PM, Melissa Jenkins wrote:

This could be possible as we have an L2 WAN, so it's possible that A+B can see
each other but not C+D.

I believe it is packet loss between (A+B) and (C+D) that is causing the problem.

If I understand correctly the token would rotate (for example) A->B->C->D based on
the order in the spread configuration file.  If B was missing a packet it would add
that to the token and pass it to C.  C would then send it back to B, D would have
no idea this was happening.

You say segment or point-to-point, does that mean a broadcast may be used for the
retransmission rather than a UDP packet?

I've now got lots of pings and tcpdumps of the arps to see if there is anything
obvious there, so perhaps that will identify the culprit.  Though I can't imagine a
failed arp renewal persisting for very long, and once the system goes into meltdown
it just gets worse and worse (also we don't drop the meshed BGP during these
bursts, so there is two-way comms on the same LAN).  However, I could see an arp
renewal failure causing asymmetric comms.  I will also build spsend & sprecv.

In the meantime is there anything I can do to stop it escalating as it does?  A
log and drop threshold would be useful; although lost messages aren't great, I'd
rather have a few dropped than everything jam.

Thanks!
Mel

On 17 Nov 2011, at 15:56, John Schultz wrote:

> Another possibility is that you have a pathological communication condition in
> your network where some of the daemons can never successfully send to some of
> the other daemons.
> 
> Let's say daemon X can't send to daemon Y, either through the segment address or
> point-to-point, due to something like a firewall or a network black hole.  If it
> then arises that daemon Y misses some data packet and asks for retransmission,
> and it happens that X is always the first daemon to see the request (on the
> token) and has the packet, then it will try to retransmit it and remove the
> request from the token.  Because the data packet never makes it to Y, Y will
> re-request it the next time it gets the token and round and round we go.
> Forward progress of other data packets will eventually stop and the daemons will
> simply loop "forever" trying to recover the missed packet.
> 
> Typically, this would require some sort of bizarre asymmetric communication
> condition where Y can successfully send to X but X cannot successfully send to Y.
> 
> To rule out this possibility I recommend you build the spsend and sprecv
> programs contained in the daemon directory and then go through all the
> permutations in your network to ensure that (a) all daemons within the segment
> can successfully send to every other daemon through the segment address and (b)
> all daemons within the segment can successfully send to every other daemon
> through their IP address.  Make sure you instruct these programs to use the same
> addresses and ports that you use in your configuration file.  You should also
> check that "port + 1" works for point-to-point communication too.
> 
> To build sprecv successfully you will probably have to edit its Makefile line to
> also link with events.o and memory.o like the spsend line does.
> 
> Cheers!
> 
> -----
> John Lane Schultz
> Spread Concepts LLC
> Phn: 301 830 8100
> Cell: 443 838 2200
> 
> On Nov 17, 2011, at 9:34 AM, Melissa Jenkins wrote:
> 
> Well, it turns out that it isn't solely membership related.
> 
> It happened several times yesterday, and again this morning.  I tried removing
> the spread daemon from the second segment, and getting the processes on that box
> to connect directly to a daemon in the main segment.
> 
> Unfortunately this has had no effect on it whatsoever :(
> 
> Behaviour was as mentioned before - large spike in retransmissions, with some
> daemons not receiving any messages.  The sender gets stuck and stops sending
> messages, and seems to hang for a reasonably long period when I restart the
> spread daemon.
> 
> spmonitor says it's sretrans that are incrementing.
> 
> If the daemon that has a high number of retransmits is killed and restarted,
> everything seems to return to normal.  Sometimes you get a couple of daemons
> stuck retransmitting.
> 
> One time there was nothing in the log at all, just spmonitor checks.  It may be
> that the logging happens after it has been stuck for a while, rather than at the
> beginning.
> 
> I'll have a look through the code and see if I can figure out where to start
> looking.
> 
> Mel
> 
> On 15 Nov 2011, at 17:12, John Schultz wrote:
> 
> > Hi Melissa,
> > 
> > I believe this is likely a bug.  There is at least one bug in the core
> > membership + EVS protocol of Spread that rarely bites people when they get
> > into extreme situations.  Unfortunately, I don't see any easy way to debug
> > this as it is likely a logic bug in the membership + EVS protocol of Spread,
> > which is quite complex.  I will put this on the list of currently reported
> > bugs and try to take a look at the core logic of the protocol to see if I can
> > spot the issue(s) that people have been reporting.
> > Cheers!
> > 
> > -----
> > John Lane Schultz
> > Spread Concepts LLC
> > Phn: 301 830 8100
> > Cell: 443 838 2200
> > 
> > On Nov 15, 2011, at 11:02 AM, Melissa Jenkins wrote:
> > 
> > 
> > Over time we've noticed that if we get a burst of high packet loss we often
> > end up with Spread daemons in a stuck state where they are constantly
> > retransmitting.  Today, the logs indicated that the packet loss coincided with
> > a few joins + leaves, though I don't have enough information to say whether
> > this is anything more than a coincidence.
> > 
> > I'm a bit at a loss as to how to troubleshoot this; all suggestions gratefully
> > received!  (I have tried turning up the logging detail but it was just
> > exhausting the disks and not showing anything more helpful.)
> > Thanks,
> > Mel
> > 
> > Basic configuration is:
> > 
> > Segment 1: 4 spread daemons, many users, all on a metro ethernet L2 LAN,
> > configured to use broadcast.
> > Segment 2: 1 spread daemon, several 'users', connected via a VPN to Segment 1.
> > It looks like the packet loss was in Segment 1, however, to unstick the Spread
> > cluster we had to restart the daemon in Segment 2 (192.168.0.1).
> > This has happened several times, always when other monitoring indicates high
> > packet loss somewhere in the network, though often just when it is between
> > segments 1 & 2.
> > 
> > spmonitor statistics show a HUGE burst of retransmissions - 14 million per
> > second according to my monitoring - and we have a large spike in the bandwidth
> > between services.  This burst/spike continues until the stuck daemon is
> > restarted.  Our normal load peaks at ~60 messages a second.
> > Some messages were still being passed during this period, though I'm not sure
> > if all hosts had received them.
> > Here are some excerpts from the log.  I'm not 100% sure they are totally
> > relevant, but it is important to note that message passing started to fail at
> > about 09:12; the massive burst in retransmissions didn't occur till closer to
> > 09:30.  Two daemons in segment 1 transmit, 4 receive - one of the receivers
> > shows a big increase in the number of messages, ~3000/second, the other shows
> > a decrease.  The two transmitting show the massive spike in retransmissions.
> > The daemon in segment one both transmits & receives and it shows the same
> > large spike.  According to spmonitor all three active transmitters have a
> > large number of sent, received and retransmitted messages.
> > Log from 192.168.0.1 (seg2a1):
> > 
> > [Tue 15 Nov 2011 09:30:10] Stat_handle_message: sent status to Monitor at seg2a1
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling join message from 192.168.0.139, State is 1
> > [Tue 15 Nov 2011 09:31:54] Handle_join in OP
> > [Tue 15 Nov 2011 09:31:54] Memb_token_loss: I lost my token, state is 1
> > [Tue 15 Nov 2011 09:31:54] Scast_alive: State is 2
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling join message from 192.168.0.203, State is 2
> > [Tue 15 Nov 2011 09:31:55] Scast_alive: State is 2
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling join message from 192.168.0.139, State is 2
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling join message from 192.168.0.203, State is 2
> > [Tue 15 Nov 2011 09:31:56] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:31:57] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:31:58] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:31:59] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:00] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:01] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:02] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:03] Form_or_fail: failed to gather
> > [Tue 15 Nov 2011 09:32:03] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:04] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:05] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:06] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:32:06] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:06] Memb_handle_message: handling refer message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:32:06] Handle_refer in GATHER
> > [Tue 15 Nov 2011 09:32:07] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:32:07] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:07] Memb_handle_message: handling refer message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:32:07] Handle_refer in GATHER
> > [Tue 15 Nov 2011 09:32:08] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:32:08] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:08] Memb_handle_message: handling refer message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:32:08] Handle_refer in GATHER
> > [Tue 15 Nov 2011 09:32:09] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:32:09] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:09] Memb_handle_message: handling refer message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:32:09] Handle_refer in GATHER
> > [Tue 15 Nov 2011 09:32:10] Memb_handle_message: handling join message from 192.168.0.203, State is 4
> > [Tue 15 Nov 2011 09:32:10] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:10] Memb_handle_message: handling refer message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:32:10] Handle_refer in GATHER
> > [Tue 15 Nov 2011 09:32:11] Memb_handle_token: handling form1 token
> > [Tue 15 Nov 2011 09:32:11] Handle_form1 in GATHER
> > [Tue 15 Nov 2011 09:32:11] Memb_handle_token: handling form1 token
> > [Tue 15 Nov 2011 09:32:11] Handle_form1 in FORM
> > [Tue 15 Nov 2011 09:32:11] Memb_handle_token: handling form2 token
> > [Tue 15 Nov 2011 09:32:11] Handle_form2 in FORM
> > [Tue 15 Nov 2011 09:32:11] Memb_handle_token: handling form2 token
> > [Tue 15 Nov 2011 09:32:11] Handle_form2 in EVS
> > [Tue 15 Nov 2011 09:32:11] Memb_transitional
> > [Tue 15 Nov 2011 09:32:11] G_handle_trans_memb:  with (192.168.0.203, 1321349531) id
> > [Tue 15 Nov 2011 09:32:11] G_handle_trans_memb in GOP
> > 
> > 
> > Log from the leader, which is in segment 1 (192.168.0.203):
> > [Tue 15 Nov 2011 09:30:25] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:26] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:27] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:28] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:29] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:30] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:52] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:52] Handle_alive in OP
> > [Tue 15 Nov 2011 09:31:52] Memb_token_loss: I lost my token, state is 1
> > [Tue 15 Nov 2011 09:31:52] Scast_alive: State is 2
> > [Tue 15 Nov 2011 09:31:52] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:52] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:31:52] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:52] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:31:53] Scast_alive: State is 2
> > [Tue 15 Nov 2011 09:31:53] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:53] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:31:53] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:53] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:31:53] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:53] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:31:54] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:54] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:54] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:54] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:54] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:54] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:55] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:55] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:55] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:55] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:55] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:55] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:56] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:56] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:56] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:56] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:56] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:56] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:31:57] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:57] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:57] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:31:58] Send_join: State is 4
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:58] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling join message from 192.168.0.139, State is 4
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:58] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:58] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:31:58] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:31:58] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:31:59] Memb_handle_token: handling form1 token
> > [Tue 15 Nov 2011 09:31:59] Handle_form1 in FORM
> > [Tue 15 Nov 2011 09:31:59] Memb_handle_token: handling form1 token
> > [Tue 15 Nov 2011 09:31:59] Handle_form1 in FORM
> > [Tue 15 Nov 2011 09:31:59] Memb_handle_message: handling join message from 192.168.0.1, State is 5
> > [Tue 15 Nov 2011 09:32:00] Memb_handle_message: handling join message from 192.168.0.1, State is 5
> > [Tue 15 Nov 2011 09:32:01] Memb_handle_message: handling join message from 192.168.0.1, State is 5
> > [Tue 15 Nov 2011 09:32:02] Memb_handle_message: handling join message from 192.168.0.1, State is 5
> > [Tue 15 Nov 2011 09:32:03] Memb_handle_message: handling join message from 192.168.0.1, State is 5
> > [Tue 15 Nov 2011 09:32:04] Memb_token_loss: I lost my token, state is 5
> > [Tue 15 Nov 2011 09:32:04] Scast_alive: State is 2
> > [Tue 15 Nov 2011 09:32:04] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:04] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:32:04] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:04] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:32:04] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:04] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:32:04] Memb_handle_message: handling join message from 192.168.0.1, State is 2
> > [Tue 15 Nov 2011 09:32:05] Scast_alive: State is 2
> > [Tue 15 Nov 2011 09:32:05] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:05] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:32:05] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:05] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:32:05] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:05] Handle_alive in SEG
> > [Tue 15 Nov 2011 09:32:05] Memb_handle_message: handling join message from 192.168.0.1, State is 2
> > [Tue 15 Nov 2011 09:32:06] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:06] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:06] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:06] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:06] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:06] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:06] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:06] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:32:07] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:07] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:07] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:07] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:07] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:07] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:07] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:07] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:32:08] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:08] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:08] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:08] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:08] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:08] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:08] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:08] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:32:09] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:09] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:09] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:09] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:09] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:09] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:09] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:09] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:32:10] Send_join: State is 4
> > [Tue 15 Nov 2011 09:32:10] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:10] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:10] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:10] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:10] Memb_handle_message: handling alive message
> > [Tue 15 Nov 2011 09:32:10] Handle_alive in GATHER
> > [Tue 15 Nov 2011 09:32:10] Memb_handle_message: handling join message from 192.168.0.1, State is 4
> > [Tue 15 Nov 2011 09:32:11] Memb_handle_token: handling form2 token
> > [Tue 15 Nov 2011 09:32:11] Handle_form2 in FORM
> > [Tue 15 Nov 2011 09:32:11] Memb_handle_token: handling form2 token
> > [Tue 15 Nov 2011 09:32:11] Handle_form2 in EVS
> > [Tue 15 Nov 2011 09:32:11] Memb_transitional
> > [Tue 15 Nov 2011 09:32:11] G_handle_trans_memb:  with (192.168.0.203, 1321349531) id
> > [Tue 15 Nov 2011 09:32:11] G_handle_trans_memb in GOP
> > [Tue 15 Nov 2011 09:32:11] G_handle_trans_memb: Synced Set (with 4 members):
> > 
> > 


_______________________________________________
Spread-users mailing list
Spread-users@lists.spread.org
http://lists.spread.org/mailman/listinfo/spread-users
