
List:       hadoop-user
Subject:    Re: Some Questions about Node Manager Memory Used
From:       Ravi Prakash <ravihadoop () gmail ! com>
Date:       2017-01-24 19:15:17
Message-ID: CAMs9kVgzwDMVvxhcqYmtYNQFGmjvyvgvB0Nq=u2SFbO6wNg77Q () mail ! gmail ! com



Hi Zhuo Chen!

YARN has a few different ways of accounting for memory. By default, the
"mem used" metric reflects the memory *guaranteed* (allocated) to your Hive
application's containers, not the memory those processes actually consume.
Whether the application uses all of that memory, or, as in your case, leaves
plenty of headroom in case it needs to expand later, is entirely up to the
application.
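To make the distinction concrete, here is a minimal sketch (container names
and numbers are hypothetical, not taken from any Hadoop API) of
scheduler-side accounting versus what the OS sees:

```python
# Hypothetical sketch: YARN-style memory accounting vs. actual process usage.
# The ResourceManager's "mem used" metric counts *allocated* container
# memory; the OS only charges processes for the memory they really touch.

containers = [
    {"id": "container_1", "allocated_mb": 4096, "actual_rss_mb": 900},
    {"id": "container_2", "allocated_mb": 8192, "actual_rss_mb": 2100},
]

# What the ResourceManager UI reports as "mem used":
rm_mem_used_mb = sum(c["allocated_mb"] for c in containers)

# What the OS would attribute to the same container processes:
os_rss_mb = sum(c["actual_rss_mb"] for c in containers)

print(rm_mem_used_mb)  # 12288 -- can approach the cluster limit
print(os_rss_mb)       # 3000  -- far less, matching what `free` shows
```

So the two numbers can legitimately diverge: the RM metric nearing its upper
limit does not imply the hosts are out of physical memory.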

There's plenty of documentation on this from several vendors. I suggest a
search engine query along the lines of "hadoop YARN memory usage".

HTH
Ravi

On Tue, Jan 24, 2017 at 1:04 AM, Zhuo Chen <ccenuo.dev@gmail.com> wrote:

> My Hive job gets stuck when submitted to the cluster. Viewing the Resource
> Manager web UI, I found the [mem used] metric had reached approximately the
> upper limit. But when I log in to the host, running 'free' shows that the
> OS is using only 13GB of memory, with about 46GB occupied by cache.
>
> [images: yarn memory metrics.png, node os memory.png]
>
> So I wonder why there is such an inconsistency, and how should I understand
> this scenario? Any explanation would be appreciated.
>
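The `free` numbers in the question also deserve a note: Linux counts page
cache separately from "used" memory because the kernel can reclaim cache on
demand. A small sketch (the 64GB total is an assumption chosen to fit the
13GB/46GB figures quoted above) of how those columns relate:

```python
# Hypothetical breakdown matching the scenario above: 13GB used, 46GB cache.
# Page cache is reclaimable, so it counts toward memory *available* to new
# processes even though it is not listed as "free".

total_gb = 64       # assumed host RAM, not stated in the original mail
used_gb = 13        # anonymous memory actually held by processes
cache_gb = 46       # page cache -- the kernel drops this under pressure
free_gb = total_gb - used_gb - cache_gb

# Memory effectively available to new allocations includes the cache:
available_gb = free_gb + cache_gb

print(free_gb)       # 5
print(available_gb)  # 51
```

In other words, a host showing 46GB of cache is not "out of memory"; most of
that cache would be handed back to containers that asked for it.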



["yarn memory metrics.png" (image/png)]
["node os memory.png" (image/png)]

