
List:       gluster-users
Subject:    Re: [Gluster-users] Volume Creation - Best Practices
From:       Jim Kinney <jim.kinney@gmail.com>
Date:       2018-08-25 11:06:45
Message-ID: 8AA0D6E2-0D8A-4791-9701-374F9DAF0600@gmail.com

I use single disks as physical volumes. Each gluster host is identical. As more space is needed for a mount point, a set of disks is added to the logical volume on each host. As my primary need is HA, all of my host nodes are simply replicas.

Prior to this config I had a physical 100 TB RAID6 array on each host. I lost 3 drives on the same array out of sheer bad luck: 2 drives died, and while I was replacing them, the third failed. The subsequent rebuild took months.

By splitting each mount point onto separate physical drives, my plan is to shorten rebuild time. Rebuilding a failed 24 TB chunk that lost 3 drives should take less time than another 100+ TB rebuild that slows everyone down. I also added a third host node to retain quorum in the event of a failure.
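
The gluster side is then a plain replica 3 volume, one brick per host, along these lines (hostnames and brick paths are placeholders):

    # replica 3 keeps quorum through the loss of any single node
    gluster volume create labdata replica 3 \
        host1:/bricks/labdata/brick \
        host2:/bricks/labdata/brick \
        host3:/bricks/labdata/brick
    gluster volume start labdata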

As these mounts are for independent research groups, a group can acquire additional storage by simply buying a triplet of drives, one per host. To mitigate drive batch failures, we buy from different vendors and two different brands of drive.
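
Adding a purchased triplet is then one new disk per host, roughly like this (same example names as above):

    # run on each of the three hosts with that host's new disk
    pvcreate /dev/sde
    vgextend vg_labdata /dev/sde
    lvextend -l +100%FREE /dev/vg_labdata/lv_labdata
    xfs_growfs /bricks/labdata    # grow the filesystem under the brick online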

On August 24, 2018 5:45:15 PM EDT, Brian Andrus <toomuchit@gmail.com> wrote:
> You can do that, but you could run into issues with the 'shared' 
> remaining space. Any one of the volumes could eat up the space you 
> planned on using in another volume. Not a huge issue, but could bite
> you.
> 
> I prefer to use ZFS for the flexibility. I create a RAIDZ pool and then separate ZFS
> filesystems within that, one for each brick. I can reserve a specific amount of space
> in the pool for each brick, and that reservation can be modified later as well.
> 
> It is easy to grow, too. Plus, configured right, ZFS stripes I/O across all the disks
> in parallel, so you get a performance speedup.
> 
> Brian Andrus
> 
> On 8/24/2018 11:45 AM, Mark Connor wrote:
> > Wondering if there is a best practice for volume creation. I don't see this
> > information in the documentation. For example: I have a 10 node
> > distribute-replicate setup with one large xfs filesystem mounted on each node.
> > 
> > Is it OK for me to have just one xfs filesystem mounted and use subdirectories
> > for my bricks for multiple volume creation? So I could have, let's say, 10
> > different volumes, but each using a brick as a subdir on my single xfs
> > filesystem on each node? In other words, multiple bricks on one xfs filesystem
> > per node? I create volumes on the fly, and creating new filesystems for each
> > node would be too much work.
> > 
> > Your thoughts?
> > 
> > 
> > 
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
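
For what it's worth, a rough sketch of the ZFS-per-brick layout Brian describes above (pool, disk, dataset, and host names are only examples, and the reservation sizes are arbitrary):

    # one raidz2 pool per host, then one dataset per brick with its own reservation
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zfs create -o reservation=20T tank/brick-vol1
    zfs create -o reservation=10T tank/brick-vol2
    zfs set reservation=30T tank/brick-vol1    # reservations can be changed later
    # each dataset then backs a brick for its own gluster volume
    gluster volume create vol1 replica 3 \
        host1:/tank/brick-vol1/brick \
        host2:/tank/brick-vol1/brick \
        host3:/tank/brick-vol1/brick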

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
