List: ceph-users
Subject: [ceph-users] Change both client/cluster network subnets
From: nasospan84 () hotmail ! com (Nasos Pan)
Date: 2015-11-26 14:14:03
Message-ID: DUB122-W40C603DA22D1981EEA2117DC040 () phx ! gbl
Hi guys. For some months I have had a simple, working Ceph cluster with 3 nodes and 3 \
monitors. Client, monitor, and cluster traffic all run over redundant 10 Gbps ports \
in the same subnet, 10.0.0.0/24.
Here is the conf
#########
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
auth supported = cephx
cluster network = 10.0.0.0/24
filestore xattr use omap = true
fsid = 169d99bb-9b62-459e-8e5e-2d101a8c17b2
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 10.0.0.0/24
osd mount options xfs = rw,noatime,inode64
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.1]
host = test2
mon addr = 10.0.0.2:6789
[mon.0]
host = test1
mon addr = 10.0.0.1:6789
[mon.2]
host = test3
mon addr = 10.0.0.3:6789
###########
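For reference, a minimal sketch of what the relevant sections might look like after the move, assuming the hosts keep the same last octet in the new 172.16.0.0/24 subnet (only the changed lines shown):

```
[global]
cluster network = 172.16.0.0/24
public network = 172.16.0.0/24

[mon.0]
host = test1
mon addr = 172.16.0.1:6789

[mon.1]
host = test2
mon addr = 172.16.0.2:6789

[mon.2]
host = test3
mon addr = 172.16.0.3:6789
```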
I want, for several reasons, to move the whole configuration to another subnet, \
172.16.0.0/24. I can stop all I/O traffic to the cluster: scrubbing, snapshots, backups, \
and everything else that would cause changes (anything else?). Just to be sure, I have \
turned scrub and deep-scrub off and set osd noout.
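For anyone following along, those flags can be set cluster-wide from any node with an admin keyring; a sketch:

```
ceph osd set noout        # prevent OSDs being marked out while stopped
ceph osd set noscrub      # pause regular scrubbing
ceph osd set nodeep-scrub # pause deep scrubbing
```

Clear them afterwards with the matching `ceph osd unset ...` commands.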
What is the best way to change this setting? I don't mind rebooting if necessary.
Apparently just changing 10.0.0.x to 172.16.0.x in the conf and restarting the services \
one by one (either OSDs first or monitors first) didn't work.
Any help? As long as I have osd noout set, can I just stop all OSDs, stop the monitors, \
and then start them one by one again?
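(Not part of the original question, but for readers of the archive: simply editing ceph.conf is not enough, because the monitor addresses live in the cluster's monmap. The procedure the Ceph docs describe for this case is to extract the monmap, rewrite it offline with monmaptool, and inject it into each stopped monitor. A sketch, assuming monitor IDs 0/1/2 and new addresses 172.16.0.1-3 as above:)

```
# grab the current monmap while the cluster is still up
ceph mon getmap -o /tmp/monmap
monmaptool --print /tmp/monmap

# stop ALL monitors, then rewrite the map offline
monmaptool --rm 0 --rm 1 --rm 2 /tmp/monmap
monmaptool --add 0 172.16.0.1:6789 --add 1 172.16.0.2:6789 \
           --add 2 172.16.0.3:6789 /tmp/monmap

# on each monitor node, inject the edited map (mon must be stopped)
ceph-mon -i 0 --inject-monmap /tmp/monmap

# update mon addr / public network / cluster network in ceph.conf,
# then start the monitors and OSDs again
```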
Thanks!
Nasos Pan