List: gluster-users
Subject: Re: [Gluster-users] snapshots questions
From: Strahil <hunter86_bg () yahoo ! com>
Date: 2019-06-27 10:11:36
Message-ID: hpycjx80hbmbmtl6jcokdlq8.1561630296725 () email ! android ! com
Don't invest too much time.
With Stratis, I expect better reporting/warning to be available.
Yet, that's only an expectation.
Best Regards,
Strahil Nikolov

On Jun 27, 2019 13:02, Dmitry Filonov <filonov@hkl.hms.harvard.edu> wrote:
>
> Thank you, Strahil -
> I was pointed to --mode=script option that works perfectly for me.
>
> As for snapshots - I am spoiled by ZFS, which has much better reporting and tools
> for working with snapshots. I will do some internal monitoring and checks around
> snapshots. I was hoping I was just missing something.
> Thanks,
>
> Fil
>
>
> On Thu, Jun 27, 2019, 1:53 AM Strahil <hunter86_bg@yahoo.com> wrote:
> >
> > If it expects a single word like 'y' or 'yes', then you can try:
> > echo 'yes' | gluster snapshot delete $(/my/script/to/find/oldest/snapshot)
> >
> > Of course, you should put some logic in place to find the oldest snapshot, but
> > that won't be hard, as the date & time of creation should be in the name.
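> > A hedged sketch of that selection logic (assuming snapshot names embed a
> > zero-padded, sortable timestamp such as snap_2019-06-26_15-00; the names here
> > are hypothetical):

```shell
# Pick the oldest snapshot by name. Assumes names carry a zero-padded
# timestamp, so plain lexicographic sort is also chronological.
oldest_snapshot() {
    sort | head -n 1
}

# On a live cluster (commented out here) this would feed the real list:
#   gluster snapshot delete "$(gluster snapshot list | oldest_snapshot)" --mode=script
# --mode=script skips the interactive confirmation prompt.
```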
> > About the situation with LVM: it is expected that the user takes care of that,
> > as thin LVs can be overcommitted.
> > For example, my arbiter has a 20 GB thin LV pool, and I have four 20 GB LVs inside
> > that pool. As long as I don't exhaust the pool's storage, I'm fine.
> >
> > You shouldn't expect LVM to play the monitoring role here - either put some
> > kind of monitoring in place, or create your own solution to monitor that.
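> > One hedged way to sketch such monitoring (the VG and pool names are
> > hypothetical; `lvs -o data_percent` is the stock LVM report field for thin
> > pool data usage):

```shell
# Warn when a thin pool's data usage crosses a threshold.
# check_pool USED_PERCENT LIMIT_PERCENT -> exit 0 if at/over the limit.
check_pool() {
    awk -v used="$1" -v limit="$2" 'BEGIN { exit !(used + 0 >= limit + 0) }'
}

# Cron use on a real system (commented out; vg_gluster/thinpool is made up):
#   used=$(lvs --noheadings -o data_percent vg_gluster/thinpool | tr -d ' ')
#   check_pool "$used" 80 && echo "thin pool above 80% - prune snapshots"
```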
> > Best Regards,
> > Strahil Nikolov
> >
> > On Jun 26, 2019 15:41, Dmitry Filonov <filonov@hkl.hms.harvard.edu> wrote:
> > >
> > > Hi,
> > > I am really new to gluster and have a couple of questions that I hope will be
> > > really easy to answer. I just couldn't find anything on this myself.
> > > I set up a replica 3 gluster volume over 3 nodes with a 2TB SSD in each node.
> > > To have snapshot functionality, I created a thin pool the size of the VG
> > > (1.82TB) and then a 1.75TB thin LV inside it on each of the bricks. It worked
> > > just fine until I scheduled hourly and daily snapshot creation on that gluster
> > > volume. In less than 2 days my thin volume got full and crashed. It did not
> > > refuse to create new snapshots - it just died, as LVM couldn't perform any
> > > operations there anymore. So my first question is how to prevent this from
> > > happening. I could create a smaller thin LV, but I still have no control over
> > > how much space I would need for snapshots. I was hoping to see warnings and
> > > errors while creating snapshots, not a failed LVM/Gluster.
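> > > [A hedged mitigation sketch: stock lvm.conf can tell dmeventd to auto-extend
> > > a thin pool before it fills, though that only helps if the VG still has free
> > > extents, i.e. the pool was created smaller than the VG:]

```
# /etc/lvm/lvm.conf, activation section: grow the thin pool by 20%
# each time it reaches 70% data usage, instead of letting it fill.
activation {
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
}
```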
> > > The second question is related but not as important. Is there a way to
> > > schedule snapshot removal in cron? gluster snapshot delete requires interactive
> > > confirmation, and I don't see any flag to auto-confirm snapshot removal.
> > > Thank you,
> > >
> > > Fil
> > >
> > > --
> > > Dmitry Filonov
> > > Linux Administrator
> > > SBGrid Core | Harvard Medical School
> > > 250 Longwood Ave, SGM-114
> > > Boston, MA 02115
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users