
List:       gluster-users
Subject:    Re: [Gluster-users] Re-Add Distributed Volume Volume
From:       "Taste-Of-IT" <kontakt@taste-of-it.de>
Date:       2021-10-27 8:26:02
Message-ID: 03d03fd139070e325537940f83519e79d598b6c7@taste-of-it.de

Hi,

the problem is: I lost one node because of a bad filesystem on the root partition. I could restore that and reuse vol1. I reinstalled the OS and created a new vol2. After that I moved the files directly from node1 and node2 into the new vol2 via an NFS mount. I assumed that if I only move files, the total used space on all disks stays the same, but usage rose, and from 6TB free I now have only 200GB of free disk space. So I assume the files were not moved but only copied.
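A quick way to check whether that is the case (a minimal sketch; /data/vol1brick is a hypothetical brick path on the old node):

    # space still consumed inside the old brick, including the hidden .glusterfs tree
    du -sh /data/vol1brick
    du -sh /data/vol1brick/.glusterfs

If .glusterfs is still large, the "moved" files are probably still there: a move between two different mounts is really a copy plus a delete, and every regular file on a brick has a hard link under .glusterfs, so the blocks stay allocated until that link is gone as well.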

Hope that helps

thx

On 25.10.2021 21:04:10, Strahil Nikolov wrote:
> To be honest, I can't really picture the problem yet.
> 
> When you reuse bricks you have two options:
> 1. Recreate the filesystem. It's simpler and easier.
> 2. Do the following:
>    - Delete all previously existing data in the brick, including the .glusterfs subdirectory.
>    - Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick.
>    - Run # getfattr -d -m . brick to examine the attributes set on the brick. Take note of the attributes.
>    - Run # setfattr -x attribute brick to remove the attributes relating to the GlusterFS file system. The trusted.glusterfs.dht attribute for a distributed volume is one such example of attributes that need to be removed.
>    It is necessary to remove the extended attributes `trusted.gfid` and `trusted.glusterfs.volume-id`, which are unique for every Gluster brick. These attributes are created the first time a brick gets added to a volume.
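For illustration, the same brick cleanup as a plain command sequence (a minimal sketch; /data/brick1 is a hypothetical brick path):

    # 1. remove all old data on the brick, including the hidden .glusterfs tree
    rm -rf /data/brick1/.glusterfs /data/brick1/*
    # 2. drop the two brick-identifying extended attributes
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # 3. list what is left and remove any remaining trusted.* GlusterFS attributes
    getfattr -d -m . /data/brick1
    setfattr -x trusted.glusterfs.dht /data/brick1    # example of a leftover attribute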
> As you still have a ".glusterfs" directory, you didn't reintegrate the brick.
> 
> The only other option I know is to use add-brick with the "force" option.
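For example (a minimal sketch; the volume and brick names are hypothetical):

    gluster volume add-brick vol1 node2:/data/brick1 force

The force keyword asks glusterd to skip some of its safety checks; whether it accepts a brick that still carries old data and attributes is something best verified on a test setup first, as Strahil suggests further down in the thread.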
> 
> Can you provide a short summary (commands only) of how the issue happened, what you did, and what error is coming up?
> 
> Best Regards,
> Strahil Nikolov 
> 
> 
> 
> 
> 
> 
> On Wednesday, 20 October 2021 at 14:06:29 GMT+3, Taste-Of-IT <kontakt@taste-of-it.de> wrote:
> 
> 
> 
> 
> Hi,
> 
> I am now moving the files from the dead vol1 to the new vol2, mounted via NFS.
> 
> The problem is that the used space rises instead of staying the same as expected. Any idea? I think it has something to do with the .glusterfs directories on the dead vol1.
> thx
> 
> Webmaster Taste-of-IT.de
> 
> On 29.08.2021 12:42:18, Strahil Nikolov wrote:
> > Best case scenario, you just mount via FUSE on the 'dead' node and start copying.
> > Yet, in your case you don't have enough space. I guess you can try on 2 VMs to simulate the failure, rebuild and then forcefully re-add the old brick. It might work, it might not ... at least it's worth trying.
> > Best Regards, Strahil Nikolov
> > 
> > Sent from Yahoo Mail on Android 
> > 
> > On Thu, Aug 26, 2021 at 15:27, Taste-Of-IT <kontakt@taste-of-it.de> wrote:
> > Hi,
> > what do you mean? Copy the data from the dead node to the running node, then add the newly installed node to the existing vol1, and after that run a rebalance? If so, this is not possible, because node1 does not have enough free space to take everything from node2.
> > thx
> > 
> > On 22.08.2021 18:35:33, Strahil Nikolov wrote:
> > > Hi,
> > > 
> > > the best way is to copy the files over the FUSE mount and later add the bricks and rebalance.
> > > Best Regards, Strahil Nikolov
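As an illustration of that sequence (a minimal sketch; the host names, paths and volume name are hypothetical):

    # mount the surviving volume via the native FUSE client and copy the data in,
    # leaving out the brick-internal .glusterfs tree
    mkdir -p /mnt/vol1
    mount -t glusterfs node1:/vol1 /mnt/vol1
    rsync -a --exclude='.glusterfs' /data/oldbrick/ /mnt/vol1/restore/
    # afterwards add the cleaned brick back and spread the data across both nodes
    gluster volume add-brick vol1 node2:/data/brick1
    gluster volume rebalance vol1 start
    gluster volume rebalance vol1 status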
> > > 
> > > Sent from Yahoo Mail on Android 
> > > 
> > > On Thu, Aug 19, 2021 at 23:04, Taste-Of-IT <kontakt@taste-of-it.de> wrote:
> > > Hello,
> > > I have two nodes with a distributed volume. The OS is on a separate disk, which crashed on one node. However, I can reinstall the OS, and the RAID6 that is used for the distributed volume has been rebuilt. The question now is how to re-add the brick, with its data, back to the existing old volume.
> > > If this is not possible, what about this idea: I create a new vol2, distributed over both nodes, and move the files directly from the brick directories to the new volume via an nfs-ganesha share?
> > > thx
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

