
List:       hadoop-user
Subject:    Re: Fair scheduler does not share resources of other queues
From:       Gurmukh Singh <gurmukh.dhillon () yahoo ! com ! INVALID>
Date:       2020-04-25 4:26:35
Message-ID: d36320d5-da11-fbf7-3940-da464eb84c5c () yahoo ! com

How is pre-emption configured?
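
For reference, a minimal sketch of the settings that question is asking about, using property names as documented for the Fair Scheduler in Hadoop 2.x / CDH 5.x (the queue name small_queue comes from the report below; the timeout and threshold values here are illustrative, not recommendations):

```xml
<!-- yarn-site.xml: preemption is off by default and must be enabled here -->
<property>
  <name>yarn.scheduler.fair.preemption</name>
  <value>true</value>
</property>

<!-- fair-scheduler.xml: per-queue preemption tuning (values are examples) -->
<allocations>
  <queue name="small_queue">
    <weight>1.0</weight>
    <!-- preempt containers from other queues if this queue has been
         below fairSharePreemptionThreshold * fair share for this many
         seconds -->
    <fairSharePreemptionTimeout>30</fairSharePreemptionTimeout>
    <fairSharePreemptionThreshold>0.5</fairSharePreemptionThreshold>
  </queue>
</allocations>
```

Note that preemption only matters for reclaiming containers already granted to other queues; a queue can normally grow past its weighted share into genuinely idle capacity unless it is capped by settings such as maxResources or maxAMShare, which is why the scheduler configuration is worth checking here.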

On 21/4/20 1:41 am, Ilya Karpov wrote:
> Hi, all,
>
> recently I've noticed strange behaviour of the YARN Fair Scheduler: 2 jobs 
> (i.e. two simultaneously started oozie launchers) started in a queue 
> with a small weight and were not able to launch spark jobs, even though 
> there were plenty of free resources in other queues.
>
> In detail:
> - hadoop (2.6, cdh 5.12) yarn with the fair scheduler
> - the queue (say, *small_queue*) with a small weight starts 2 oozie 
> launcher jobs
> - the oozie launcher jobs occupy all of small_queue's capacity (even 
> exceeding it by 1 core), and both are ready to submit a spark job (in 
> the same queue = small_queue)
> - about 1/4 of the cluster's resources are free in other queues (much 
> more than the spark jobs require)
>
> Expected behaviour: free resources from other queues will be given to 
> the oozie launchers (from small_queue) to start their spark jobs
> Actual behaviour: the spark jobs were never started
>
> Does anybody have an idea what prevented spark jobs from launch?



