
List:       gluster-devel
Subject:    Re: [Gluster-devel] [Gluster-users] Evergrowing distributed volume question
From:       Strahil Nikolov <hunter86_bg@yahoo.com>
Date:       2021-03-19 17:44:02
Message-ID: 1393391867.2029030.1616175842049@mail.yahoo.com


Yes.
 
 
On Fri, Mar 19, 2021 at 19:14, Nux! <nux@li.nux.ro> wrote:

So then, in theory my plan could work if I always rebalance.

Thanks
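
(For concreteness, the grow-then-rebalance workflow looks roughly like the
following; "backups" and the brick path are just example names:

    gluster volume add-brick backups server4:/export/brick1
    gluster volume rebalance backups start
    gluster volume rebalance backups status

The status command can be polled until the rebalance completes.)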

On 19 March 2021 17:12:07 GMT, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:

As gluster does not have a metadata server, the clients identify the brick via a
special algorithm based on the file/dir name. Each brick corresponds to a 'range'
of hashes, thus when you add a new brick, you always need to rebalance the volume.

Best Regards,
Strahil Nikolov
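
(To make the idea concrete, here is a minimal Python sketch of hash-range
placement. It is only an illustration, not Gluster's actual DHT hash, and the
brick names are made up:

    # Toy model: each brick owns a contiguous slice of the 32-bit hash space,
    # and a file lands on the brick whose slice contains the hash of its name.
    import zlib

    BRICKS = ["server1:/brick", "server2:/brick", "server3:/brick"]  # hypothetical

    def hash_ranges(bricks):
        # Split the space evenly; adding a brick changes every range,
        # which is why a rebalance is needed after add-brick.
        step = (2 ** 32) // len(bricks)
        return [(i * step, (i + 1) * step - 1, b) for i, b in enumerate(bricks)]

    def pick_brick(filename, bricks):
        h = zlib.crc32(filename.encode())  # stand-in for the real DHT hash
        for lo, hi, brick in hash_ranges(bricks):
            if lo <= h <= hi:
                return brick
        return bricks[-1]  # catch the few top values lost to integer division

    print(pick_brick("backup-2021-03-19.tar.gz", BRICKS))

Note how adding a fourth entry to BRICKS changes every range, so files already
on disk would hash to different bricks until a rebalance moves them, or at
least fixes the layout.)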
 
    Hello,

A while ago I attempted, and failed, to maintain an "evergrowing" storage 
solution based on GlusterFS.
I was relying on a distributed, non-replicated volume to host backups and 
so on, with the idea that whenever it got close to full I would just add 
another brick (server) and keep it going like that.
In reality, many of the writes kept being distributed to a brick that had 
(over time) become full, ending in "out of space" errors despite one or 
more other bricks having plenty of space.

Can anyone advise whether current GlusterFS behaviour has improved in 
this regard, i.e. does it check whether a brick is full and redirect the 
write to one that is not?

Regards,
Lucian
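
(As an aside on the "full brick" problem: distribute volumes have a
cluster.min-free-disk option that is intended to steer new file creation away
from bricks below a free-space threshold, e.g.

    gluster volume set backups cluster.min-free-disk 10%

The volume name is just an example, and whether this option fully covers the
scenario described above is exactly the question being asked here.)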


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.  





-------

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


