List:       gluster-users
Subject:    Re: [Gluster-users] unable to remove brick, please help
From:       Alex K <rightkicktech@gmail.com>
Date:       2017-11-16 5:56:16
Message-ID: <CABMULtLWxA7uOiC5rD8+Xu56X480qmXFMVec-9cavXi9nE0qow@mail.gmail.com>

If you do not need any data from that brick, you can append "force" to the
command, as the error message suggests.
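
For example, based on the commands quoted below, the force variant would look
something like this (a sketch, not a tested sequence; only safe if the data on
srv1's brick is expendable):

gluster volume remove-brick data replica 2 srv1:/gluster/data/brick1 force

After that, "gluster volume info data" should show the volume as 1 x 2 = 2.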

Alex

On Nov 15, 2017 11:49, "Rudi Ahlers" <rudiahlers@gmail.com> wrote:

> Hi,
>
> I am trying to remove a brick from a server which is no longer part of
> the gluster pool, but I keep running into errors for which I cannot find
> answers on Google.
>
> [root@virt2 ~]# gluster peer status
> Number of Peers: 3
>
> Hostname: srv1
> Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb
> State: Peer in Cluster (Disconnected)
>
> Hostname: srv3
> Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825
> State: Peer in Cluster (Connected)
>
> Hostname: srv4
> Uuid: 1a6eedc6-59eb-4329-b091-2b9bc6f0834f
> State: Peer in Cluster (Connected)
> [root@virt2 ~]#
>
>
>
>
> [root@virt2 ~]# gluster volume info data
>
> Volume Name: data
> Type: Replicate
> Volume ID: d09e4534-8bc0-4b30-be89-bc1ec2b439c7
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: srv1:/gluster/data/brick1
> Brick2: srv2:/gluster/data/brick1
> Brick3: srv3:/gluster/data/brick1
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: enable
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 512MB
>
>
>
> [root@virt2 ~]# gluster volume remove-brick data replica 2
> srv1:/gluster/data/brick1 start
> volume remove-brick start: failed: Migration of data is not needed when
> reducing replica count. Use the 'force' option
>
>
> [root@virt2 ~]# gluster volume remove-brick data replica 2
> srv1:/gluster/data/brick1 commit
> Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
> volume remove-brick commit: failed: Brick srv1:/gluster/data/brick1 is not
> decommissioned. Use start or force option
>
>
>
> The server virt1 is not part of the cluster anymore.
>
>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
