
List:       cassandra-user
Subject:    Re: High memory usage during nodetool repair
From:       Amandeep Srivastava <amandeep.srivastava1996 () gmail ! com>
Date:       2021-07-29 13:29:55
Message-ID: CABrAqH-tgbwCh5REkArmZ5nhfXz_bAUOLqUFSGm0CTOSRJ=GRQ () mail ! gmail ! com

Hi Erick,

Limiting mmap to index only seems to have resolved the issue. The max RAM
usage remained at 60% this time. Could you please point me to the
limitations of setting this param? For starters, I can see read
performance dropping by up to 30% (CASSANDRA-8464
<https://issues.apache.org/jira/browse/CASSANDRA-8464>).
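For anyone following along, the change I made was the cassandra.yaml setting
below (a sketch of my config, not a recommendation; as far as I know this
param is not present in the shipped yaml by default, and the valid values
are auto, mmap, mmap_index_only, and standard):

```yaml
# cassandra.yaml - restrict mmap to index files only, so SSTable data
# files are read via standard I/O instead of being memory-mapped
disk_access_mode: mmap_index_only
```

The trade-off, as noted above, is reduced read performance in exchange for
a lower and more predictable off-heap footprint.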

Also, could you please shed some light on the extended questions in my
earlier email?

Thanks a lot.

Regards,
Aman

On Thu, Jul 29, 2021 at 12:52 PM Amandeep Srivastava <
amandeep.srivastava1996@gmail.com> wrote:

> Thanks, Bowen, I don't think that's an issue - but yes, I can try upgrading
> to 3.11.5 and limiting the merkle tree size to bring down the memory
> utilization.
>
> Thanks, Erick, let me try that.
>
> Can someone please share documentation on the internal workings of
> full repairs, if any exists? I wanted to understand the roles of the
> heap and off-heap memory separately during the process.
>
> Also, in my case, once the nodes reach 95% memory usage, they stay
> there for almost 10-12 hours after the repair completes before falling
> back to 65%. Any pointers on what might be consuming off-heap memory for
> so long, and whether anything can be done to clear it earlier?
>
> Thanks,
> Aman
>
>
>
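For reference, my understanding is that the merkle tree cap mentioned in the
quoted mail maps to the cassandra.yaml setting below, added by
CASSANDRA-14096 in 3.11.5 (a sketch; the 256 here is purely illustrative,
not a tuned value):

```yaml
# cassandra.yaml (3.11.5+) - caps the memory used to hold merkle trees
# during a repair session; smaller values trade repair precision (more
# overstreaming) for a lower memory ceiling
repair_session_space_in_mb: 256
```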

-- 
Regards,
Aman



