List:       cassandra-user
Subject:    Re: Adding an IPv6-only server to a dual-stack cluster
From:       Lapo Luchini <lapo () lapo ! it>
Date:       2022-11-18 9:12:47
Message-ID: tl7iag$pf1$1 () ciao ! gmane ! io

So basically listen_address: :: (which should accept both IPv4 and IPv6) 
is fine, as long as broadcast_address reports the same single IPv4 
address that the node has always reported?

The presence of broadcast_address removes the "different nodes in the 
cluster pick different addresses for you" case?
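For concreteness, the per-node change being asked about would presumably 
look something like the sketch below. This is only an illustration of the 
question, not a confirmed working setup; 203.0.113.10 is a placeholder for 
the node's existing IPv4 address, and whether Cassandra accepts a wildcard 
IPv6 bind here is exactly what the cassandra.yaml warning casts doubt on:

```yaml
# Sketch only: bind the storage port on all interfaces, IPv4 and IPv6.
# cassandra.yaml warns against wildcard listen addresses, so this
# assumes broadcast_address pins the node to a single identity.
listen_address: "::"

# Keep advertising the same IPv4 address the node has always used,
# so the rest of the cluster sees no change in the node's identity.
broadcast_address: 203.0.113.10   # placeholder for the node's real IPv4
```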

On 2022-11-16 14:03, Bowen Song via user wrote:
> I would expect that you'll need NAT64 in order to have a cluster that 
> mixes IPv6-only servers with dual-stack servers broadcasting their 
> IPv4 addresses. Once all IPv4-broadcasting dual-stack nodes are 
> replaced with nodes that are either IPv6-only, or dual-stack but 
> broadcasting IPv6 instead, the NAT64 can be removed.
> 
> 
> On 09/11/2022 17:27, Lapo Luchini wrote:
>> I have a (3.11) cluster running on IPv4 addresses on a set of 
>> dual-stack servers; I'd like to add a new IPv6-only server to the 
>> cluster… is it possible to have the dual-stack ones answer on IPv6 
>> addresses as well (while keeping the single IPv4 address as 
>> broadcast_address, I guess)?
>>
>> This sentence in cassandra.yaml suggests it's impossible:
>>
>>     Setting listen_address to 0.0.0.0 is always wrong.
>>
>> FAQ #1 also confirms that (is this true also with broadcast_address?):
>>
>>     if different nodes in the cluster pick different addresses for you,
>>     Bad Things happen.
>>
>> Is it possible to do this, or is my only option to shut down the entire 
>> cluster and relaunch it as IPv6-only?
>> (IPv6 is available on each and every host)
>>
>> And even in that case, is it possible for a cluster to go down on one 
>> set of IPv4 addresses and be recovered on a parallel set of IPv6 
>> addresses? (I guess gossip does not expect that)
>>
>> thanks in advance for any suggestion,
>>
> 

