
List:       hadoop-commits
Subject:    svn commit: r1477390 [2/2] - /hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/sr
From:       llu@apache.org
Date:       2013-04-29 22:36:53
Message-ID: 20130429223653.B19E32388A3D@eris.apache.org


Modified: hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html?rev=1477390&r1=1477389&r2=1477390&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-2/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html Mon Apr 29 22:36:53 2013
@@ -27,9 +27,9 @@ These release notes include new develope
 <li> <a href="https://issues.apache.org/jira/browse/YARN-357">YARN-357</a>.
      Major bug reported by Daryn Sharp and fixed by Daryn Sharp \
(resourcemanager)<br>  <b>App submission should not be synchronized</b><br>
-     <blockquote>MAPREDUCE-2953 fixed a race condition with querying of app status \
by making {{RMClientService#submitApplication}} synchronously invoke \
{{RMAppManager#submitApplication}}. However, the {{synchronized}} keyword was also \
                added to {{RMAppManager#submitApplication}} with the comment:
-bq. I made the submitApplication synchronized to keep it consistent with the other \
routines in RMAppManager although I do not believe it needs it since the rmapp \
datastructure is already a concurrentMap and I don't see anything else that would be \
                an issue.
-
+     <blockquote>MAPREDUCE-2953 fixed a race condition with querying of app status \
by making {{RMClientService#submitApplication}} synchronously invoke \
{{RMAppManager#submitApplication}}. However, the {{synchronized}} keyword was also \
added to {{RMAppManager#submitApplication}} with the comment: +bq. I made the \
submitApplication synchronized to keep it consistent with the other routines in \
RMAppManager although I do not believe it needs it since the rmapp datastructure is \
already a concurrentMap and I don't see anything else that would be an issue. +
 It's been observed that app submission latency is being unnecessarily \
impacted.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-355">YARN-355</a>.
      Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp \
(resourcemanager)<br> @@ -38,22 +38,22 @@ It's been observed that app submission l
 <li> <a href="https://issues.apache.org/jira/browse/YARN-354">YARN-354</a>.
      Blocker bug reported by Liang Xie and fixed by Liang Xie <br>
      <b>WebAppProxyServer exits immediately after startup</b><br>
-     <blockquote>Please see HDFS-4426 for details; I found the YARN WebAppProxyServer \
is broken by HADOOP-9181 as well. Here's the hot fix, and I verified it manually in our \
                test cluster.
-
+     <blockquote>Please see HDFS-4426 for details; I found the YARN WebAppProxyServer \
is broken by HADOOP-9181 as well. Here's the hot fix, and I verified it manually in our \
test cluster. +
 I really apologize for bringing about such trouble...</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-343">YARN-343</a>.
      Major bug reported by Thomas Graves and fixed by Xuan Gong \
(capacityscheduler)<br>  <b>Capacity Scheduler maximum-capacity value -1 is \
                invalid</b><br>
-     <blockquote>I tried to start the resource manager using the capacity scheduler \
with a particular queues maximum-capacity set to -1 which is supposed to disable it \
                according to the docs but I got the following exception:
-
-java.lang.IllegalArgumentException: Illegal value  of maximumCapacity -0.01 used in \
                call to setMaxCapacity for queue foo
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.checkMaxCapacity(CSQueueUtils.java:31)
                
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:220)
                
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.&lt;init&gt;(LeafQueue.java:191)
                
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:310)
                
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:325)
                
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:232)
                
-    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:202)
 +     <blockquote>I tried to start the resource manager using the capacity scheduler \
with a particular queues maximum-capacity set to -1 which is supposed to disable it \
according to the docs but I got the following exception: +
+java.lang.IllegalArgumentException: Illegal value  of maximumCapacity -0.01 used in \
call to setMaxCapacity for queue foo +    at \
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueueUtils.checkMaxCapacity(CSQueueUtils.java:31)
 +    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:220)
 +    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.&lt;init&gt;(LeafQueue.java:191)
 +    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:310)
 +    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:325)
 +    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:232)
 +    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:202)
  </blockquote></li>
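The -0.01 in the trace above suggests the configured -1 is divided by 100 before the range check, so the documented sentinel never gets a chance to mean "undefined". A hedged sketch of special-casing such a sentinel before validation (hypothetical helper, not the actual CSQueueUtils code):

// Hypothetical helper, not the real CapacityScheduler code: treat the documented
// sentinel -1 ("maximum-capacity undefined") as "no limit" before range-checking,
// instead of letting it reach the validator as -0.01.
static float resolveMaxCapacity(float configuredPercent) {
    final float UNDEFINED = -1.0f;
    if (configuredPercent == UNDEFINED) {
        return 1.0f; // no maximum: the queue may use up to 100% of its parent
    }
    float max = configuredPercent / 100.0f;
    if (max < 0.0f || max > 1.0f) {
        throw new IllegalArgumentException("Illegal value of maximumCapacity " + max);
    }
    return max;
}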
 <li> <a href="https://issues.apache.org/jira/browse/YARN-336">YARN-336</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
@@ -62,57 +62,57 @@ java.lang.IllegalArgumentException: Ille
 <li> <a href="https://issues.apache.org/jira/browse/YARN-334">YARN-334</a>.
      Critical bug reported by Thomas Graves and fixed by Thomas Graves <br>
      <b>Maven RAT plugin is not checking all source files</b><br>
-     <blockquote>yarn side of HADOOP-9097
-
-
-
-Running 'mvn apache-rat:check' passes, but running RAT by hand (by downloading the \
JAR) produces some warnings for Java files, amongst others. +     <blockquote>yarn \
side of HADOOP-9097 +
+
+
+Running 'mvn apache-rat:check' passes, but running RAT by hand (by downloading the \
JAR) produces some warnings for Java files, amongst others.  </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-331">YARN-331</a>.
      Major improvement reported by Sandy Ryza and fixed by Sandy Ryza \
(scheduler)<br>  <b>Fill in missing fair scheduler documentation</b><br>
-     <blockquote>In the fair scheduler documentation, a few config options are \
                missing:
-locality.threshold.node
-locality.threshold.rack
-max.assign
-aclSubmitApps
-minSharePreemptionTimeout
+     <blockquote>In the fair scheduler documentation, a few config options are \
missing: +locality.threshold.node
+locality.threshold.rack
+max.assign
+aclSubmitApps
+minSharePreemptionTimeout
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-330">YARN-330</a>.
      Major bug reported by Hitesh Shah and fixed by Sandy Ryza (nodemanager)<br>
      <b>Flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown</b><br>
-     <blockquote>Seems to be timing related as the container status RUNNING as \
returned by the ContainerManager does not really indicate that the container task has \
                been launched. Sleep of 5 seconds is not reliable. 
-
-Running org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
-Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.353 sec \
                &lt;&lt;&lt; FAILURE!
-testKillContainersOnShutdown(org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown) \
                Time elapsed: 9283 sec  &lt;&lt;&lt; FAILURE!
-junit.framework.AssertionFailedError: Did not find sigterm message
-	at junit.framework.Assert.fail(Assert.java:47)
-	at junit.framework.Assert.assertTrue(Assert.java:20)
-	at org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown.testKillContainersOnShutdown(TestNodeManagerShutdown.java:162)
                
-
-Logs:
-
-2013-01-09 14:13:08,401 INFO  [AsyncDispatcher event handler] container.Container \
(ContainerImpl.java:handle(835)) - Container container_0_0000_01_000000 transitioned \
                from NEW to LOCALIZING
-2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] \
localizer.LocalizedResource (LocalizedResource.java:handle(194)) - Resource \
file:hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-serv \
er-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/tmpDir/scriptFile.sh \
                transitioned from INIT to DOWNLOADING
-2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] \
localizer.ResourceLocalizationService (ResourceLocalizationService.java:handle(521)) \
                - Created localizer for container_0_0000_01_000000
-2013-01-09 14:13:08,589 INFO  [LocalizerRunner for container_0_0000_01_000000] \
localizer.ResourceLocalizationService \
(ResourceLocalizationService.java:writeCredentials(895)) - Writing credentials to the \
nmPrivate file hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop \
-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0_0000_01_000000.tokens. \
                Credentials list:
-2013-01-09 14:13:08,628 INFO  [LocalizerRunner for container_0_0000_01_000000] \
nodemanager.DefaultContainerExecutor \
                (DefaultContainerExecutor.java:createUserCacheDirs(373)) - \
                Initializing user nobody
-2013-01-09 14:13:08,709 INFO  [main] containermanager.ContainerManagerImpl \
(ContainerManagerImpl.java:getContainerStatus(538)) - Returning container_id {, \
app_attempt_id {, application_id {, id: 0, cluster_timestamp: 0, }, attemptId: 1, }, \
                }, state: C_RUNNING, diagnostics: "", exit_status: -1000,
-2013-01-09 14:13:08,781 INFO  [LocalizerRunner for container_0_0000_01_000000] \
nodemanager.DefaultContainerExecutor \
(DefaultContainerExecutor.java:startLocalizer(99)) - Copying from \
hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-no \
demanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0_0000_01_000000.tokens \
to hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server \
-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/ \
                nm0/usercache/nobody/appcache/application_0_0000/container_0_0000_01_000000.tokens
                
-
-
+     <blockquote>Seems to be timing related as the container status RUNNING as \
returned by the ContainerManager does not really indicate that the container task has \
been launched. Sleep of 5 seconds is not reliable.  +
+Running org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
+Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.353 sec \
&lt;&lt;&lt; FAILURE! \
+testKillContainersOnShutdown(org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown) \
Time elapsed: 9283 sec  &lt;&lt;&lt; FAILURE! +junit.framework.AssertionFailedError: \
Did not find sigterm message +	at junit.framework.Assert.fail(Assert.java:47)
+	at junit.framework.Assert.assertTrue(Assert.java:20)
+	at org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown.testKillContainersOnShutdown(TestNodeManagerShutdown.java:162)
 +
+Logs:
+
+2013-01-09 14:13:08,401 INFO  [AsyncDispatcher event handler] container.Container \
(ContainerImpl.java:handle(835)) - Container container_0_0000_01_000000 transitioned \
from NEW to LOCALIZING +2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] \
localizer.LocalizedResource (LocalizedResource.java:handle(194)) - Resource \
file:hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-serv \
er-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/tmpDir/scriptFile.sh \
transitioned from INIT to DOWNLOADING +2013-01-09 14:13:08,412 INFO  [AsyncDispatcher \
event handler] localizer.ResourceLocalizationService \
(ResourceLocalizationService.java:handle(521)) - Created localizer for \
container_0_0000_01_000000 +2013-01-09 14:13:08,589 INFO  [LocalizerRunner for \
container_0_0000_01_000000] localizer.ResourceLocalizationService \
(ResourceLocalizationService.java:writeCredentials(895)) - Writing credentials to the \
nmPrivate file hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop \
-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0_0000_01_000000.tokens. \
Credentials list: +2013-01-09 14:13:08,628 INFO  [LocalizerRunner for \
container_0_0000_01_000000] nodemanager.DefaultContainerExecutor \
(DefaultContainerExecutor.java:createUserCacheDirs(373)) - Initializing user nobody \
+2013-01-09 14:13:08,709 INFO  [main] containermanager.ContainerManagerImpl \
(ContainerManagerImpl.java:getContainerStatus(538)) - Returning container_id {, \
app_attempt_id {, application_id {, id: 0, cluster_timestamp: 0, }, attemptId: 1, }, \
}, state: C_RUNNING, diagnostics: "", exit_status: -1000, +2013-01-09 14:13:08,781 \
INFO  [LocalizerRunner for container_0_0000_01_000000] \
nodemanager.DefaultContainerExecutor \
(DefaultContainerExecutor.java:startLocalizer(99)) - Copying from \
hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-no \
demanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0_0000_01_000000.tokens \
to hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server \
-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/ \
nm0/usercache/nobody/appcache/application_0_0000/container_0_0000_01_000000.tokens +
+
 </blockquote></li>
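Since the flakiness comes from a fixed 5-second sleep that does not guarantee the container has actually launched, a bounded polling wait is the usual remedy. A generic sketch (hypothetical test helper, not the actual TestNodeManagerShutdown code):

// Hypothetical test helper: poll for a condition with a deadline instead of
// sleeping a fixed 5 seconds and hoping the container has been launched by then.
static void waitFor(java.util.function.BooleanSupplier condition,
                    long timeoutMillis, long pollMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!condition.getAsBoolean()) {
        if (System.currentTimeMillis() > deadline) {
            throw new AssertionError("Timed out waiting for condition");
        }
        Thread.sleep(pollMillis);
    }
}

// e.g. waitFor(() -> processOutput.contains("sigterm"), 30_000, 100);  // processOutput is hypothetical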
 <li> <a href="https://issues.apache.org/jira/browse/YARN-328">YARN-328</a>.
      Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas \
(resourcemanager)<br>  <b>Use token request messages defined in hadoop common \
                </b><br>
-     <blockquote>YARN changes related to HADOOP-9192 to reuse the protobuf messages \
defined in common. +     <blockquote>YARN changes related to HADOOP-9192 to reuse the \
protobuf messages defined in common.  </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-325">YARN-325</a>.
      Blocker bug reported by Jason Lowe and fixed by Arun C Murthy \
                (capacityscheduler)<br>
      <b>RM CapacityScheduler can deadlock when getQueueInfo() is called and a \
                container is completing</b><br>
-     <blockquote>If a client calls getQueueInfo on a parent queue (e.g.: the root \
queue) and containers are completing then the RM can deadlock.  getQueueInfo() locks \
the ParentQueue and then calls the child queues' getQueueInfo() methods in turn.  \
However when a container completes, it locks the LeafQueue then calls back into the \
                ParentQueue.  When the two mix, it's a recipe for deadlock.
-
+     <blockquote>If a client calls getQueueInfo on a parent queue (e.g.: the root \
queue) and containers are completing then the RM can deadlock.  getQueueInfo() locks \
the ParentQueue and then calls the child queues' getQueueInfo() methods in turn.  \
However when a container completes, it locks the LeafQueue then calls back into the \
ParentQueue.  When the two mix, it's a recipe for deadlock. +
 Stacktrace to follow.</blockquote></li>
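The report describes a classic lock-ordering deadlock: one thread locks the parent and then a child, while another locks a child and then calls back into the parent. A standalone illustration with placeholder classes (not the actual ParentQueue/LeafQueue code):

// Standalone illustration of the lock-ordering problem described above; the class
// names are placeholders, not the real ParentQueue/LeafQueue implementations.
class Parent {
    private final Child child = new Child(this);

    synchronized void getQueueInfo() {   // thread A: locks Parent...
        child.getQueueInfo();            // ...then tries to lock Child
    }

    synchronized void childCompleted() { }
}

class Child {
    private final Parent parent;
    Child(Parent parent) { this.parent = parent; }

    synchronized void getQueueInfo() { }

    synchronized void completedContainer() { // thread B: locks Child...
        parent.childCompleted();             // ...then tries to lock Parent -> deadlock
    }
}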
 <li> <a href="https://issues.apache.org/jira/browse/YARN-320">YARN-320</a>.
      Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp \
(resourcemanager)<br> @@ -121,7 +121,7 @@ Stacktrace to follow.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-319">YARN-319</a>.
      Major bug reported by shenhong and fixed by shenhong (resourcemanager , \
                scheduler)<br>
      <b>Submit a job to a queue that not allowed in fairScheduler, client will hold \
                forever.</b><br>
-     <blockquote>The RM uses the fair scheduler; when a client submits a job to a queue \
that does not allow that user to submit jobs, the client will hang \
forever. +     <blockquote>The RM uses the fair scheduler; when a client submits a job to a \
queue that does not allow that user to submit jobs, the client will hang \
forever.  </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-315">YARN-315</a>.
      Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas <br>
@@ -134,20 +134,20 @@ Stacktrace to follow.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-301">YARN-301</a>.
      Major bug reported by shenhong and fixed by shenhong (resourcemanager , \
                scheduler)<br>
      <b>Fair scheduler throws ConcurrentModificationException when iterating over \
                app's priorities</b><br>
-     <blockquote>In my test cluster, the fair scheduler hit a \
                ConcurrentModificationException and the RM crashed; here is the message:
-
-2012-12-30 17:14:17,171 FATAL \
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling \
                event type NODE_UPDATE to the scheduler
-java.util.ConcurrentModificationException
-        at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100)
-        at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154)
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:297)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:181)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:780)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:842)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:98)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:340)
                
-        at java.lang.Thread.run(Thread.java:662)
-
+     <blockquote>In my test cluster, the fair scheduler hit a \
ConcurrentModificationException and the RM crashed; here is the message: +
+2012-12-30 17:14:17,171 FATAL \
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in handling \
event type NODE_UPDATE to the scheduler +java.util.ConcurrentModificationException
+        at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1100)
+        at java.util.TreeMap$KeyIterator.next(TreeMap.java:1154)
+        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:297)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:181)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:780)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:842)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:98)
 +        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:340)
 +        at java.lang.Thread.run(Thread.java:662)
+
 </blockquote></li>
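The trace shows an iterator over a TreeMap of the app's priorities failing while the map is modified. A minimal standalone reproduction of that failure mode (not the scheduler code itself):

import java.util.Map;
import java.util.TreeMap;

// Minimal reproduction of the failure in the trace above: iterating a TreeMap's
// key set while the map is structurally modified throws ConcurrentModificationException.
public class CmeDemo {
    public static void main(String[] args) {
        Map<Integer, String> priorities = new TreeMap<>();
        priorities.put(1, "MAP");
        priorities.put(2, "REDUCE");

        for (Integer p : priorities.keySet()) {   // TreeMap$KeyIterator.next(...)
            priorities.put(3, "NEW");             // structural change mid-iteration
        }                                         // -> ConcurrentModificationException
    }
}

Typical fixes are to iterate over a snapshot of the keys or to synchronize both the iteration and the mutation on the same lock.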
 <li> <a href="https://issues.apache.org/jira/browse/YARN-300">YARN-300</a>.
      Major bug reported by shenhong and fixed by Sandy Ryza (resourcemanager , \
scheduler)<br> @@ -160,8 +160,8 @@ java.util.ConcurrentModificationExceptio
 <li> <a href="https://issues.apache.org/jira/browse/YARN-288">YARN-288</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , \
                scheduler)<br>
      <b>Fair scheduler queue doesn't accept any jobs when ACLs are \
                configured.</b><br>
-     <blockquote>If a queue is configured with an ACL for who can submit jobs, no \
                jobs are allowed, even if a user on the list tries.
-
+     <blockquote>If a queue is configured with an ACL for who can submit jobs, no \
jobs are allowed, even if a user on the list tries. +
 This is caused by the scheduler thinking the user is "yarn", because it calls \
UserGroupInformation.getCurrentUser() instead of \
UserGroupInformation.createRemoteUser() with the given user name.</blockquote></li>
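UserGroupInformation.getCurrentUser() and UserGroupInformation.createRemoteUser() are real Hadoop APIs; the surrounding ACL check below is only a hypothetical sketch of the distinction being described:

import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical ACL check illustrating the bug described above: getCurrentUser()
// returns the daemon's own identity (e.g. "yarn"), so ACL checks must instead be
// done against a UGI built from the submitting user's name.
class SubmitAclCheck {
    boolean wrong(AccessControlCheck acl) throws IOException {
        // Always evaluates the ACL for the RM daemon user, not the submitter.
        return acl.isAllowed(UserGroupInformation.getCurrentUser());
    }

    boolean right(AccessControlCheck acl, String submittingUser) {
        // Evaluates the ACL for the user who actually submitted the application.
        return acl.isAllowed(UserGroupInformation.createRemoteUser(submittingUser));
    }

    // Placeholder for whatever ACL abstraction the scheduler uses.
    interface AccessControlCheck {
        boolean isAllowed(UserGroupInformation user);
    }
}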
                <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-286">YARN-286</a>.
      Major new feature reported by Tom White and fixed by Tom White \
(applications)<br> @@ -170,12 +170,12 @@ This is caused by using the scheduler th
 <li> <a href="https://issues.apache.org/jira/browse/YARN-285">YARN-285</a>.
      Major improvement reported by Derek Dagit and fixed by Derek Dagit <br>
      <b>RM should be able to provide a tracking link for apps that have already been \
                purged</b><br>
-     <blockquote>As applications complete, the RM tracks their IDs in a completed \
list.  This list is routinely truncated to limit the total number of application \
                remembered by the RM.
-
-When a user clicks the History for a job, the browser is redirected to the \
application's tracking link obtained from the stored application instance.  But when \
                the application has been purged from the RM, an error is displayed.
-
-In very busy clusters the rate at which applications complete can cause applications \
to be purged from the RM's internal list within hours, which breaks the proxy URLs \
                users have saved for their jobs.
-
+     <blockquote>As applications complete, the RM tracks their IDs in a completed \
list.  This list is routinely truncated to limit the total number of application \
remembered by the RM. +
+When a user clicks the History for a job, the browser is redirected to the \
application's tracking link obtained from the stored application instance.  But when \
the application has been purged from the RM, an error is displayed. +
+In very busy clusters the rate at which applications complete can cause applications \
to be purged from the RM's internal list within hours, which breaks the proxy URLs \
users have saved for their jobs. +
 We would like the RM to provide valid tracking links that persist so that users are not \
frustrated by broken links.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-283">YARN-283</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
@@ -200,8 +200,8 @@ We would like the RM to provide valid tr
 <li> <a href="https://issues.apache.org/jira/browse/YARN-272">YARN-272</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
      <b>Fair scheduler log messages try to print objects without overridden toString \
                methods</b><br>
-     <blockquote>A lot of junk gets printed out like this:
-
+     <blockquote>A lot of junk gets printed out like this:
+
 2012-12-11 17:31:52,998 INFO \
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp: \
Application application_1355270529654_0003 reserved container \
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl@324f0f97 on \
node host: c1416.hal.cloudera.com:46356 #containers=7 available=0 used=8192, \
currently has 4 at priority \
org.apache.hadoop.yarn.api.records.impl.pb.PriorityPBImpl@33; currentReservation \
4096</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-271">YARN-271</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , \
scheduler)<br> @@ -218,13 +218,13 @@ We would like the RM to provide valid tr
 <li> <a href="https://issues.apache.org/jira/browse/YARN-264">YARN-264</a>.
      Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
      <b>y.s.rm.DelegationTokenRenewer attempts to renew token even after removing an \
                app</b><br>
-     <blockquote>yarn.s.rm.security.DelegationTokenRenewer uses TimerTask/Timer. \
When such a timer task is canceled, already scheduled tasks run to completion. The \
task should check for such cancellation before running. Also, delegationTokens needs \
                to be synchronized on all accesses.
-
+     <blockquote>yarn.s.rm.security.DelegationTokenRenewer uses TimerTask/Timer. \
When such a timer task is canceled, already scheduled tasks run to completion. The \
task should check for such cancellation before running. Also, delegationTokens needs \
to be synchronized on all accesses. +
 </blockquote></li>
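Cancelling a java.util.Timer task does not stop an execution that is already scheduled, so the task itself has to re-check for cancellation. A generic sketch of that pattern (not the DelegationTokenRenewer code):

import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;

// Generic sketch of the pattern implied above: a TimerTask that re-checks a
// cancellation flag before doing work, because cancelling a Timer task does not
// stop an execution that has already been scheduled or started.
class RenewalTask extends TimerTask {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    @Override
    public void run() {
        if (cancelled.get()) {
            return; // the app was removed; skip the now-pointless renewal
        }
        // ... renew the delegation token here ...
    }

    void cancelRenewal() {
        cancelled.set(true);
        cancel(); // also remove it from the Timer's queue where possible
    }
}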
 <li> <a href="https://issues.apache.org/jira/browse/YARN-258">YARN-258</a>.
      Major bug reported by Ravi Prakash and fixed by Ravi Prakash \
                (resourcemanager)<br>
      <b>RM web page UI shows Invalid Date for start and finish times</b><br>
-     <blockquote>Whenever the number of jobs was greater than a 100, two javascript \
arrays were being populated. appsData and appsTableData. appsData was winning out \
(because it was coming out later) and so renderHadoopDate was trying to render a \
&lt;br title=""...&gt; string. +     <blockquote>Whenever the number of jobs was \
greater than a 100, two javascript arrays were being populated. appsData and \
appsTableData. appsData was winning out (because it was coming out later) and so \
renderHadoopDate was trying to render a &lt;br title=""...&gt; string.  \
</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-254">YARN-254</a>.
      Major improvement reported by Sandy Ryza and fixed by Sandy Ryza \
(resourcemanager , scheduler)<br> @@ -241,7 +241,7 @@ We would like the RM to provide \
valid tr  <li> <a href="https://issues.apache.org/jira/browse/YARN-230">YARN-230</a>.
      Major sub-task reported by Bikas Saha and fixed by Bikas Saha \
(resourcemanager)<br>  <b>Make changes for RM restart phase 1</b><br>
-     <blockquote>As described in YARN-128, phase 1 of RM restart puts in place \
mechanisms to save application state and read them back after restart. Upon restart, \
the NM's are asked to reboot and the previously running AM's are restarted. +     \
<blockquote>As described in YARN-128, phase 1 of RM restart puts in place mechanisms \
to save application state and read them back after restart. Upon restart, the NM's \
are asked to reboot and the previously running AM's are restarted.  After this is \
done, RM HA and work preserving restart can continue in parallel. For more details \
please refer to the design document in YARN-128</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-229">YARN-229</a>.
      Major sub-task reported by Bikas Saha and fixed by Bikas Saha \
(resourcemanager)<br> @@ -250,34 +250,34 @@ After this is done, RM HA and work prese
 <li> <a href="https://issues.apache.org/jira/browse/YARN-225">YARN-225</a>.
      Critical bug reported by Devaraj K and fixed by Devaraj K (resourcemanager)<br>
     <b>Proxy Link in RM UI throws NPE in Secure mode</b><br>
-     <blockquote>{code:xml}
-java.lang.NullPointerException
-	at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:241)
                
-	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
-	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
-	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
-	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
                
-	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
                
-	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
                
-	at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:975)
                
-	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
                
-	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
-	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
-	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
-	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
-	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
-	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
                
-	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
-	at org.mortbay.jetty.Server.handle(Server.java:326)
-	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
-	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
                
-	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
-	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
-	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
-	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
-	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
-
-
+     <blockquote>{code:xml}
+java.lang.NullPointerException
+	at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:241)
 +	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
+	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
+	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
+	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 +	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
 +	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 +	at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:975)
 +	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 +	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
+	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
+	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
+	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
+	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
+	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 +	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
+	at org.mortbay.jetty.Server.handle(Server.java:326)
+	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
+	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
 +	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
+	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
+	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
+	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
+	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
+
+
 {code}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-224">YARN-224</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza <br>
@@ -286,8 +286,8 @@ java.lang.NullPointerException
 <li> <a href="https://issues.apache.org/jira/browse/YARN-223">YARN-223</a>.
      Critical bug reported by Radim Kolar and fixed by Radim Kolar <br>
      <b>Change processTree interface to work better with native code</b><br>
-     <blockquote>The problem is that on every update of processTree a new object is \
required. This is undesirable when working with a processTree implementation in native \
                code.
-
+     <blockquote>The problem is that on every update of processTree a new object is \
required. This is undesirable when working with a processTree implementation in native \
code. +
 replace ProcessTree.getProcessTree() with updateProcessTree(). No new object \
allocation is needed and it simplifies application code a bit.</blockquote></li>  <li> \
                <a href="https://issues.apache.org/jira/browse/YARN-222">YARN-222</a>.
                
      Major improvement reported by Sandy Ryza and fixed by Sandy Ryza \
(resourcemanager , scheduler)<br> @@ -308,46 +308,46 @@ replace \
ProcessTree.getProcessTree() wit  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-214">YARN-214</a>.
      Major bug reported by Jason Lowe and fixed by Jonathan Eagles \
                (resourcemanager)<br>
      <b>RMContainerImpl does not handle event EXPIRE at state RUNNING</b><br>
-     <blockquote>RMContainerImpl has a race condition where a container can enter \
the RUNNING state just as the container expires.  This results in an invalid event \
                transition error:
-
-{noformat}
-2012-11-11 05:31:38,954 [ResourceManager Event Processor] ERROR \
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: Can't \
                handle this event at current state
-org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: EXPIRE \
                at RUNNING
-        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
                
-        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
                
-        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:205)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:44)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp.containerCompleted(SchedulerApp.java:203)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1337)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainer(CapacityScheduler.java:739)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:659)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:80)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:340)
                
-        at java.lang.Thread.run(Thread.java:619)
-{noformat}
-
+     <blockquote>RMContainerImpl has a race condition where a container can enter \
the RUNNING state just as the container expires.  This results in an invalid event \
transition error: +
+{noformat}
+2012-11-11 05:31:38,954 [ResourceManager Event Processor] ERROR \
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: Can't \
handle this event at current state \
+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: EXPIRE \
at RUNNING +        at \
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
 +        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
 +        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
 +        at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:205)
 +        at org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:44)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApp.containerCompleted(SchedulerApp.java:203)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1337)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainer(CapacityScheduler.java:739)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:659)
 +        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:80)
 +        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:340)
 +        at java.lang.Thread.run(Thread.java:619)
+{noformat}
+
 EXPIRE needs to be handled (well at least ignored) in the RUNNING state to account \
for this race condition.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-212">YARN-212</a>.
      Blocker bug reported by Nathan Roberts and fixed by Nathan Roberts \
                (nodemanager)<br>
      <b>NM state machine ignores an APPLICATION_CONTAINER_FINISHED event when it \
                shouldn't</b><br>
-     <blockquote>The NM state machines can make the following two invalid state \
transitions when a speculative attempt is killed shortly after it gets started. When \
this happens the NM keeps the log aggregation context open for this application and \
therefore chews up FDs and leases on the NN, eventually running the NN out of FDs and \
                bringing down the entire cluster.
-
-
-2012-11-07 05:36:33,774 [AsyncDispatcher event handler] WARN \
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: \
                Can't handle this event at current state
-org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: \
                APPLICATION_CONTAINER_FINISHED at INITING
-
-2012-11-07 05:36:33,775 [AsyncDispatcher event handler] WARN \
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Can't \
                handle this event at current state: Current: [DONE], eventType: \
                [INIT_CONTAINER]
-org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: \
                INIT_CONTAINER at DONE
-
-
+     <blockquote>The NM state machines can make the following two invalid state \
transitions when a speculative attempt is killed shortly after it gets started. When \
this happens the NM keeps the log aggregation context open for this application and \
therefore chews up FDs and leases on the NN, eventually running the NN out of FDs and \
bringing down the entire cluster. +
+
+2012-11-07 05:36:33,774 [AsyncDispatcher event handler] WARN \
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: \
Can't handle this event at current state \
+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: \
APPLICATION_CONTAINER_FINISHED at INITING +
+2012-11-07 05:36:33,775 [AsyncDispatcher event handler] WARN \
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Can't \
handle this event at current state: Current: [DONE], eventType: [INIT_CONTAINER] \
+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: \
INIT_CONTAINER at DONE +
+
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-206">YARN-206</a>.
      Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
      <b>TestApplicationCleanup.testContainerCleanup occasionally fails</b><br>
-     <blockquote>testContainerCleanup is occasionally failing with the error:
-
-testContainerCleanup(org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup): \
expected:&lt;2&gt; but was:&lt;1&gt; +     <blockquote>testContainerCleanup is \
occasionally failing with the error: +
+testContainerCleanup(org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup): \
expected:&lt;2&gt; but was:&lt;1&gt;  </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-204">YARN-204</a>.
      Major bug reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov \
(applications)<br> @@ -356,8 +356,8 @@ testContainerCleanup(org.apache.hadoop.y
 <li> <a href="https://issues.apache.org/jira/browse/YARN-202">YARN-202</a>.
      Critical bug reported by Kihwal Lee and fixed by Kihwal Lee <br>
      <b>Log Aggregation generates a storm of fsync() for namenode</b><br>
-     <blockquote>When log aggregation is on, each write to an aggregated container \
log causes hflush() to be called. For large clusters, this can create a lot of \
                fsync() calls for namenode. 
-
+     <blockquote>When log aggregation is on, each write to an aggregated container \
log causes hflush() to be called. For large clusters, this can create a lot of \
fsync() calls for namenode.  +
 We have seen 6-7x increase in the average number of fsync operations compared to \
1.0.x on a large busy cluster. Over 99% of fsync ops were for log aggregation writing \
to tmp files.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-201">YARN-201</a>.
      Critical bug reported by Jason Lowe and fixed by Jason Lowe \
(capacityscheduler)<br> @@ -366,87 +366,87 @@ We have seen 6-7x increase in the \
averag  <li> <a href="https://issues.apache.org/jira/browse/YARN-189">YARN-189</a>.
      Blocker bug reported by Thomas Graves and fixed by Thomas Graves \
(resourcemanager)<br>  <b>deadlock in RM - AMResponse object</b><br>
-     <blockquote>we ran into a deadlock in the RM.
-
-=============================
-"1128743461@qtp-1252749669-5201":
-  waiting for ownable synchronizer 0x00002aabbc87b960, (a \
                java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
-  which is held by "AsyncDispatcher event handler"
-"AsyncDispatcher event handler":
-  waiting to lock monitor 0x00002ab0bba3a370 (object 0x00002aab3d4cd698, a \
                org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl),
-  which is held by "IPC Server handler 36 on 8030"
-"IPC Server handler 36 on 8030":
-  waiting for ownable synchronizer 0x00002aabbc87b960, (a \
                java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
-  which is held by "AsyncDispatcher event handler"
-Java stack information for the threads listed above:
-===================================================
-"1128743461@qtp-1252749669-5201":
-        at sun.misc.Unsafe.park(Native Method)
-        - parking to wait for  &lt;0x00002aabbc87b960&gt; (a \
                java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
-        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)        \
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
                
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:941) \
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1261)
                
-        at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594) \
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getFinalApplicationStatus(RMAppAttemptImpl.java:2
                
-95)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getFinalApplicationStatus(RMAppImpl.java:222)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices.getApps(RMWebServices.java:328)
                
-        at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
-        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                
-        at java.lang.reflect.Method.invoke(Method.java:597)
-        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
                
-        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDis \
patchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
                
-        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaM
                
-...
-...
-..
-  
-
-"AsyncDispatcher event handler":
-        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.unregisterAttempt(ApplicationMasterService.java:307)
                
-        - waiting to lock &lt;0x00002aab3d4cd698&gt; (a \
                org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl)
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$BaseFinalTransition.transition(RMAppAttemptImpl.java:647)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$FinalTransition.transition(RMAppAttemptImpl.java:809)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$FinalTransition.transition(RMAppAttemptImpl.java:796)
                
-        at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
                
-        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
                
-        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
                
-        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
                
-        - locked &lt;0x00002aabbb673090&gt; (a \
                org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:478)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:81)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:436)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:417)
                
-        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
                
-        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
                
-        at java.lang.Thread.run(Thread.java:619)
-"IPC Server handler 36 on 8030":
-        at sun.misc.Unsafe.park(Native Method)
-        - parking to wait for  &lt;0x00002aabbc87b960&gt; (a \
                java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
-        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
                
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
                
-        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
                
-        at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:807)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.pullJustFinishedContainers(RMAppAttemptImpl.java:437)
                
-        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:285)
                
-        - locked &lt;0x00002aab3d4cd698&gt; (a \
                org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl)
-        at org.apache.hadoop.yarn.api.impl.pb.service.AMRMProtocolPBServiceImpl.allocate(AMRMProtocolPBServiceImpl.java:56)
                
-        at org.apache.hadoop.yarn.proto.AMRMProtocol$AMRMProtocolService$2.callBlockingMethod(AMRMProtocol.java:87)
                
-        at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Server.call(ProtoOverHadoopRpcEngine.java:353)
                
-        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1528)
-        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1524)
-        at java.security.AccessController.doPrivileged(Native Method)
-        at javax.security.auth.Subject.doAs(Subject.java:396)
-        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
                
-        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1522)
+     <blockquote>we ran into a deadlock in the RM.
+
+=============================
+"1128743461@qtp-1252749669-5201":
+  waiting for ownable synchronizer 0x00002aabbc87b960, (a \
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), +  which is held by \
"AsyncDispatcher event handler" +"AsyncDispatcher event handler":
+  waiting to lock monitor 0x00002ab0bba3a370 (object 0x00002aab3d4cd698, a \
org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl), +  which is held by \
"IPC Server handler 36 on 8030" +"IPC Server handler 36 on 8030":
+  waiting for ownable synchronizer 0x00002aabbc87b960, (a \
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync), +  which is held by \
"AsyncDispatcher event handler" +Java stack information for the threads listed above:
+===================================================
+"1128743461@qtp-1252749669-5201":
+        at sun.misc.Unsafe.park(Native Method)
+        - parking to wait for  &lt;0x00002aabbc87b960&gt; (a \
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) +        at \
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)        at \
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
 +        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:941) \
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1261)
 +        at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:594) \
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.getFinalApplicationStatus(RMAppAttemptImpl.java:2
 +95)
+        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getFinalApplicationStatus(RMAppImpl.java:222)
 +        at org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices.getApps(RMWebServices.java:328)
 +        at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
+        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 +        at java.lang.reflect.Method.invoke(Method.java:597)
+        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 +        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDi \
spatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
 +        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaM
 +...
+...
+..
+  
+
+"AsyncDispatcher event handler":
+        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.unregisterAttempt(ApplicationMasterService.java:307)
 +        - waiting to lock &lt;0x00002aab3d4cd698&gt; (a \
org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl) +        at \
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$BaseFinalTransition.transition(RMAppAttemptImpl.java:647)
 +        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$FinalTransition.transition(RMAppAttemptImpl.java:809)
 +        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$FinalTransition.transition(RMAppAttemptImpl.java:796)
 +        at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
 +        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
 +        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
 +        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
 +        - locked &lt;0x00002aabbb673090&gt; (a \
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine) +        at \
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:478)
 +        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:81)
 +        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:436)
 +        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:417)
 +        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
 +        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
 +        at java.lang.Thread.run(Thread.java:619)
+"IPC Server handler 36 on 8030":
+        at sun.misc.Unsafe.park(Native Method)
+        - parking to wait for  &lt;0x00002aabbc87b960&gt; (a \
java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) +        at \
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158) +        at \
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
 +        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:842)
 +        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1178)
 +        at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:807)
 +        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.pullJustFinishedContainers(RMAppAttemptImpl.java:437)
 +        at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:285)
 +        - locked &lt;0x00002aab3d4cd698&gt; (a \
org.apache.hadoop.yarn.api.records.impl.pb.AMResponsePBImpl) +        at \
org.apache.hadoop.yarn.api.impl.pb.service.AMRMProtocolPBServiceImpl.allocate(AMRMProtocolPBServiceImpl.java:56)
 +        at org.apache.hadoop.yarn.proto.AMRMProtocol$AMRMProtocolService$2.callBlockingMethod(AMRMProtocol.java:87)
 +        at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Server.call(ProtoOverHadoopRpcEngine.java:353)
 +        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1528)
+        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1524)
+        at java.security.AccessController.doPrivileged(Native Method)
+        at javax.security.auth.Subject.doAs(Subject.java:396)
+        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
 +        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1522)
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-188">YARN-188</a>.
      Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov \
(capacityscheduler)<br>  <b>Coverage fixing for CapacityScheduler</b><br>
-     <blockquote>some tests for CapacityScheduler
-YARN-188-branch-0.23.patch patch for branch 0.23
-YARN-188-branch-2.patch patch for branch 2
-YARN-188-trunk.patch  patch for trunk
-
+     <blockquote>some tests for CapacityScheduler
+YARN-188-branch-0.23.patch patch for branch 0.23
+YARN-188-branch-2.patch patch for branch 2
+YARN-188-trunk.patch  patch for trunk
+
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-187">YARN-187</a>.
      Major new feature reported by Sandy Ryza and fixed by Sandy Ryza \
(scheduler)<br> @@ -455,10 +455,10 @@ YARN-188-trunk.patch  patch for trunk
 <li> <a href="https://issues.apache.org/jira/browse/YARN-186">YARN-186</a>.
      Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov \
(resourcemanager , scheduler)<br>  <b>Coverage fixing LinuxContainerExecutor</b><br>
-     <blockquote>Added some tests for LinuxContainerExecutor
-YARN-186-branch-0.23.patch patch for branch-0.23
-YARN-186-branch-2.patch patch for branch-2
-YARN-186-trunk.patch patch for trunk
+     <blockquote>Added some tests for LinuxContainerExecutor
+YARN-186-branch-0.23.patch patch for branch-0.23
+YARN-186-branch-2.patch patch for branch-2
+YARN-186-trunk.patch patch for trunk
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-184">YARN-184</a>.
      Major improvement reported by Sandy Ryza and fixed by Sandy Ryza <br>
@@ -475,49 +475,49 @@ ARN-186-trunk.patch patch for trank
 <li> <a href="https://issues.apache.org/jira/browse/YARN-180">YARN-180</a>.
      Critical bug reported by Thomas Graves and fixed by Arun C Murthy \
                (capacityscheduler)<br>
     <b>Capacity scheduler - containers that get reserved create container token too \
                early</b><br>
-     <blockquote>The capacity scheduler has the ability to 'reserve' containers.  \
Unfortunately, before it decides that it goes to reserved rather than assigned, the \
Container object is created which creates a container token that expires in roughly \
                10 minutes by default.  
-
+     <blockquote>The capacity scheduler has the ability to 'reserve' containers.  \
Unfortunately, before it decides that it goes to reserved rather than assigned, the \
Container object is created which creates a container token that expires in roughly \
10 minutes by default.   +
 This means that by the time the NM frees up enough space on that node for the \
container to move to assigned the container token may have expired.</blockquote></li> \
                <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-179">YARN-179</a>.
      Blocker bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar \
Vavilapalli (capacityscheduler)<br>  <b>Bunch of test failures on trunk</b><br>
-     <blockquote>{{CapacityScheduler.setConf()}} mandates a YarnConfiguration. It \
doesn't need to: throughout all of YARN, components only depend on Configuration and \
                depend on the callers to provide correct configuration.
-
+     <blockquote>{{CapacityScheduler.setConf()}} mandates a YarnConfiguration. It \
doesn't need to: throughout all of YARN, components only depend on Configuration and \
depend on the callers to provide correct configuration. +
 This is causing multiple tests to fail.</blockquote></li>
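For illustration, a minimal sketch of the pattern the note argues for: accept any Configuration and normalize it internally rather than requiring callers to pass a YarnConfiguration. The class and method below are illustrative, not the actual CapacityScheduler code.
{code}
// Illustrative sketch only: wrap a plain Configuration instead of rejecting it.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SchedulerConfigSketch {
  private Configuration conf;

  public void setConf(Configuration conf) {
    // Wrap rather than reject: a plain Configuration is normalized here.
    this.conf = (conf instanceof YarnConfiguration)
        ? conf : new YarnConfiguration(conf);
  }

  public Configuration getConf() {
    return conf;
  }
}
{code}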
 <li> <a href="https://issues.apache.org/jira/browse/YARN-178">YARN-178</a>.
      Critical bug reported by Radim Kolar and fixed by Radim Kolar <br>
      <b>Fix custom ProcessTree instance creation</b><br>
-     <blockquote>1. The current pluggable ResourceCalculatorProcessTree is not passed the \
root process id to the custom implementation, making it unusable.
-
-2. pstree does not extend Configured as it should.
-
+     <blockquote>1. The current pluggable ResourceCalculatorProcessTree is not passed the \
root process id to the custom implementation, making it unusable. +
+2. pstree does not extend Configured as it should.
+
 Added constructor with pid argument with testsuite. Also added test that pstree is \
correctly configured.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-177">YARN-177</a>.
      Critical bug reported by Thomas Graves and fixed by Arun C Murthy \
                (capacityscheduler)<br>
      <b>CapacityScheduler - adding a queue while the RM is running has wacky \
                results</b><br>
-     <blockquote>Adding a queue to the capacity scheduler while the RM is running \
and then running a job in the queue added results in very strange behavior.  The \
cluster Total Memory can either decrease or increase.  We had a cluster where total \
memory decreased to almost 1/6th the capacity. Running on a small test cluster \
                resulted in the capacity going up by simply adding a queue and \
                running wordcount.  
-
-Looking at the RM logs, used memory can go negative but other logs show the number \
                positive:
-
-
-2012-10-21 22:56:44,796 [ResourceManager Event Processor] INFO \
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: \
assignedContainer queue=root usedCapacity=0.0375 absoluteUsedCapacity=0.0375 \
                used=memory: 7680 cluster=memory: 204800
-
-2012-10-21 22:56:45,831 [ResourceManager Event Processor] INFO \
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: \
completedContainer queue=root usedCapacity=-0.0225 absoluteUsedCapacity=-0.0225 \
                used=memory: -4608 cluster=memory: 204800
-
+     <blockquote>Adding a queue to the capacity scheduler while the RM is running \
and then running a job in the queue added results in very strange behavior.  The \
cluster Total Memory can either decrease or increase.  We had a cluster where total \
memory decreased to almost 1/6th the capacity. Running on a small test cluster \
resulted in the capacity going up by simply adding a queue and running wordcount.   +
+Looking at the RM logs, used memory can go negative but other logs show the number \
positive: +
+
+2012-10-21 22:56:44,796 [ResourceManager Event Processor] INFO \
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: \
assignedContainer queue=root usedCapacity=0.0375 absoluteUsedCapacity=0.0375 \
used=memory: 7680 cluster=memory: 204800 +
+2012-10-21 22:56:45,831 [ResourceManager Event Processor] INFO \
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: \
completedContainer queue=root usedCapacity=-0.0225 absoluteUsedCapacity=-0.0225 \
used=memory: -4608 cluster=memory: 204800 +
   </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-170">YARN-170</a>.
      Major bug reported by Sandy Ryza and fixed by Sandy Ryza (nodemanager)<br>
      <b>NodeManager stop() gets called twice on shutdown</b><br>
-     <blockquote>The stop method in the NodeManager gets called twice when the \
                NodeManager is shut down via the shutdown hook.
-
-The first is the stop that gets called directly by the shutdown hook.  The second \
occurs when the NodeStatusUpdaterImpl is stopped.  The NodeManager responds to the \
NodeStatusUpdaterImpl stop stateChanged event by stopping itself.  This is so that \
NodeStatusUpdaterImpl can notify the NodeManager to stop, by stopping itself in \
                response to a request from the ResourceManager
-
+     <blockquote>The stop method in the NodeManager gets called twice when the \
NodeManager is shut down via the shutdown hook. +
+The first is the stop that gets called directly by the shutdown hook.  The second \
occurs when the NodeStatusUpdaterImpl is stopped.  The NodeManager responds to the \
NodeStatusUpdaterImpl stop stateChanged event by stopping itself.  This is so that \
NodeStatusUpdaterImpl can notify the NodeManager to stop, by stopping itself in \
response to a request from the ResourceManager +
 This could be avoided if the NodeStatusUpdaterImpl were to stop the NodeManager by \
calling its stop method directly.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-169">YARN-169</a>.
      Minor improvement reported by Anthony Rojas and fixed by Anthony Rojas \
                (nodemanager)<br>
      <b>Update log4j.appender.EventCounter to use \
                org.apache.hadoop.log.metrics.EventCounter</b><br>
-     <blockquote>We should update the log4j.appender.EventCounter in \
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties \
to use *org.apache.hadoop.log.metrics.EventCounter* rather than \
*org.apache.hadoop.metrics.jvm.EventCounter* to avoid triggering the following \
                warning:
-
+     <blockquote>We should update the log4j.appender.EventCounter in \
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties \
to use *org.apache.hadoop.log.metrics.EventCounter* rather than \
*org.apache.hadoop.metrics.jvm.EventCounter* to avoid triggering the following \
warning: +
 {code}WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use \
org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties \
files{code}</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-166">YARN-166</a>.
      Major bug reported by Thomas Graves and fixed by Thomas Graves \
(capacityscheduler)<br> @@ -526,8 +526,8 @@ This could be avoided if the NodeStatusU
 <li> <a href="https://issues.apache.org/jira/browse/YARN-165">YARN-165</a>.
      Blocker improvement reported by Jason Lowe and fixed by Jason Lowe \
                (resourcemanager)<br>
      <b>RM should point tracking URL to RM web page for app when AM fails</b><br>
-     <blockquote>Currently when an ApplicationMaster fails the ResourceManager is \
updating the tracking URL to an empty string, see \
RMAppAttemptImpl.ContainerFinishedTransition.  Unfortunately when the client attempts \
to follow the proxy URL it results in a web page showing an HTTP 500 error and an \
                ugly backtrace because "http://" isn't a very helpful tracking URL.
-
+     <blockquote>Currently when an ApplicationMaster fails the ResourceManager is \
updating the tracking URL to an empty string, see \
RMAppAttemptImpl.ContainerFinishedTransition.  Unfortunately when the client attempts \
to follow the proxy URL it results in a web page showing an HTTP 500 error and an \
ugly backtrace because "http://" isn't a very helpful tracking URL. +
 It would be much more helpful if the proxy URL redirected to the RM webapp page for \
the specific application.  That page shows the various AM attempts and pointers to \
their logs which will be useful for debugging the problems that caused the AM \
attempts to fail.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-163">YARN-163</a>.
      Major bug reported by Jason Lowe and fixed by Jason Lowe (nodemanager)<br>
@@ -540,8 +540,8 @@ It would be much more helpful if the pro
 <li> <a href="https://issues.apache.org/jira/browse/YARN-159">YARN-159</a>.
      Major bug reported by Thomas Graves and fixed by Thomas Graves \
                (resourcemanager)<br>
      <b>RM web ui applications page should be sorted to display last app first \
                </b><br>
-     <blockquote>RM web ui applications page should be sorted to display last app \
                first.
-
+     <blockquote>RM web ui applications page should be sorted to display last app \
first. +
 It currently sorts with the smallest application id first, i.e. the first apps that \
were submitted.  After you have one page worth of apps, it's much more useful for it to \
sort such that the biggest appid (last submitted app) shows up \
first.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-151">YARN-151</a>.
      Major bug reported by Robert Joseph Evans and fixed by Ravi Prakash <br>
@@ -562,39 +562,39 @@ It currently sorts with smallest applica
 <li> <a href="https://issues.apache.org/jira/browse/YARN-140">YARN-140</a>.
      Major bug reported by Ahmed Radwan and fixed by Ahmed Radwan \
                (capacityscheduler)<br>
      <b>Add capacity-scheduler-default.xml to provide a default set of \
                configurations for the capacity scheduler.</b><br>
-     <blockquote>When setting up the capacity scheduler users are faced with \
                problems like:
-
-{code}
-FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting \
                ResourceManager
-java.lang.IllegalArgumentException: Illegal capacity of -1 for queue root
-{code}
-
-This basically arises from missing basic configurations which, in many cases, do not \
need to be provided explicitly; a default configuration will be sufficient. For \
example, to address the error above, the user needs to add a capacity of 100 to the \
root queue.
-
-So, we need to add a capacity-scheduler-default.xml; this will be helpful to provide \
the basic set of default configurations required to run the capacity scheduler. The \
user can still override existing configurations or provide new ones in \
capacity-scheduler.xml. This is similar to *-default.xml vs *-site.xml for yarn, \
                core, mapred, hdfs, etc.
-
+     <blockquote>When setting up the capacity scheduler users are faced with \
problems like: +
+{code}
+FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting \
ResourceManager +java.lang.IllegalArgumentException: Illegal capacity of -1 for queue \
root +{code}
+
+This basically arises from missing basic configurations which, in many cases, do not \
need to be provided explicitly; a default configuration will be sufficient. For \
example, to address the error above, the user needs to add a capacity of 100 to the \
root queue. +
+So, we need to add a capacity-scheduler-default.xml; this will be helpful to provide \
the basic set of default configurations required to run the capacity scheduler. The \
user can still override existing configurations or provide new ones in \
capacity-scheduler.xml. This is similar to *-default.xml vs *-site.xml for yarn, \
core, mapred, hdfs, etc. +
 </blockquote></li>
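For readers hitting the quoted error, a minimal sketch of the kind of defaults this issue asks for, expressed with a plain Configuration. The property keys follow the yarn.scheduler.capacity.&lt;queue-path&gt;.* convention used in these notes and should be verified against your release; they are not taken from the committed capacity-scheduler-default.xml.
{code}
// Illustrative sketch: give the root queue an explicit capacity so the
// scheduler does not see -1. Property names are assumptions to verify.
import org.apache.hadoop.conf.Configuration;

public class CapacityDefaultsSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("yarn.scheduler.capacity.root.queues", "default");
    conf.set("yarn.scheduler.capacity.root.capacity", "100");
    conf.set("yarn.scheduler.capacity.root.default.capacity", "100");
    System.out.println("root capacity = "
        + conf.get("yarn.scheduler.capacity.root.capacity"));
  }
}
{code}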
 <li> <a href="https://issues.apache.org/jira/browse/YARN-139">YARN-139</a>.
      Major bug reported by Nathan Roberts and fixed by Vinod Kumar Vavilapalli \
                (api)<br>
      <b>Interrupted Exception within AsyncDispatcher leads to user confusion</b><br>
-     <blockquote>Successful applications tend to get InterruptedExceptions during \
shutdown. The exception is harmless but it leads to lots of user confusion and \
                therefore could be cleaned up.
-
-
-2012-09-28 14:50:12,477 WARN [AsyncDispatcher event handler] \
                org.apache.hadoop.yarn.event.AsyncDispatcher: Interrupted Exception \
                while stopping
-java.lang.InterruptedException
-	at java.lang.Object.wait(Native Method)
-	at java.lang.Thread.join(Thread.java:1143)
-	at java.lang.Thread.join(Thread.java:1196)
-	at org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:105)
-	at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
-	at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
-	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:437)
                
-	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:402)
                
-	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
-	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
-	at java.lang.Thread.run(Thread.java:619)
-2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] \
                org.apache.hadoop.yarn.service.AbstractService: Service:Dispatcher is \
                stopped.
-2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] \
org.apache.hadoop.yarn.service.AbstractService: \
Service:org.apache.hadoop.mapreduce.v2.app.MRAppMaster is stopped. +     \
<blockquote>Successful applications tend to get InterruptedExceptions during \
shutdown. The exception is harmless but it leads to lots of user confusion and \
therefore could be cleaned up. +
+
+2012-09-28 14:50:12,477 WARN [AsyncDispatcher event handler] \
org.apache.hadoop.yarn.event.AsyncDispatcher: Interrupted Exception while stopping \
+java.lang.InterruptedException +	at java.lang.Object.wait(Native Method)
+	at java.lang.Thread.join(Thread.java:1143)
+	at java.lang.Thread.join(Thread.java:1196)
+	at org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:105)
+	at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
+	at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
+	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:437)
 +	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler.handle(MRAppMaster.java:402)
 +	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
+	at java.lang.Thread.run(Thread.java:619)
+2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] \
org.apache.hadoop.yarn.service.AbstractService: Service:Dispatcher is stopped. \
+2012-09-28 14:50:12,477 INFO [AsyncDispatcher event handler] \
org.apache.hadoop.yarn.service.AbstractService: \
Service:org.apache.hadoop.mapreduce.v2.app.MRAppMaster is stopped.  2012-09-28 \
14:50:12,477 INFO [AsyncDispatcher event handler] \
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Exiting MR \
AppMaster..GoodBye</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-136">YARN-136</a>.
      Major bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar \
Vavilapalli (resourcemanager)<br> @@ -603,8 +603,8 @@ java.lang.InterruptedException
 <li> <a href="https://issues.apache.org/jira/browse/YARN-135">YARN-135</a>.
      Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar \
                Vavilapalli (resourcemanager)<br>
      <b>ClientTokens should be per app-attempt and be unregistered on \
                App-finish.</b><br>
-     <blockquote>Two issues:
- - ClientTokens are per app-attempt but are created per app.
+     <blockquote>Two issues:
+ - ClientTokens are per app-attempt but are created per app.
  - Apps don't get unregistered from RMClientTokenSecretManager.</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-134">YARN-134</a>.
      Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar \
Vavilapalli <br> @@ -613,30 +613,30 @@ java.lang.InterruptedException
 <li> <a href="https://issues.apache.org/jira/browse/YARN-133">YARN-133</a>.
      Major bug reported by Thomas Graves and fixed by Ravi Prakash \
(resourcemanager)<br>  <b>update web services docs for RM clusterMetrics</b><br>
-     <blockquote>Looks like jira \
https://issues.apache.org/jira/browse/MAPREDUCE-3747 added in more RM cluster metrics \
but the docs didn't get updated: \
http://hadoop.apache.org/docs/r0.23.3/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Metrics_API
                
-
-
+     <blockquote>Looks like jira \
https://issues.apache.org/jira/browse/MAPREDUCE-3747 added in more RM cluster metrics \
but the docs didn't get updated: \
http://hadoop.apache.org/docs/r0.23.3/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Metrics_API
 +
+
 </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-131">YARN-131</a>.
      Major bug reported by Ahmed Radwan and fixed by Ahmed Radwan \
                (capacityscheduler)<br>
      <b>Incorrect ACL properties in capacity scheduler documentation</b><br>
-     <blockquote>The CapacityScheduler apt file incorrectly specifies the property \
                names controlling acls for application submission and queue \
                administration.
-
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_jobs}}
-should be
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_applications}}
-
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_jobs}}
-should be
-{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_queue}}
-
+     <blockquote>The CapacityScheduler apt file incorrectly specifies the property \
names controlling acls for application submission and queue administration. +
+{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_jobs}}
+should be
+{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_applications}}
+
+{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_jobs}}
+should be
+{{yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_queue}}
+
 Uploading a patch momentarily.</blockquote></li>
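As a usage sketch, the corrected keys from this note applied to an illustrative "default" queue; the queue name and ACL values are examples only, and the "users groups" value format is the usual Hadoop ACL convention rather than something stated in this note.
{code}
// Illustrative example of the corrected property names.
import org.apache.hadoop.conf.Configuration;

public class CapacityAclSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("yarn.scheduler.capacity.root.default.acl_submit_applications",
        "alice,bob developers");
    conf.set("yarn.scheduler.capacity.root.default.acl_administer_queue",
        "opsadmin ops");
    System.out.println(
        conf.get("yarn.scheduler.capacity.root.default.acl_submit_applications"));
  }
}
{code}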
 <li> <a href="https://issues.apache.org/jira/browse/YARN-129">YARN-129</a>.
      Major improvement reported by Tom White and fixed by Tom White (client)<br>
      <b>Simplify classpath construction for mini YARN tests</b><br>
-     <blockquote>The test classpath includes a special file called \
'mrapp-generated-classpath' (or similar in distributed shell) that is constructed at \
build time, and whose contents are a classpath with all the dependencies needed to \
run the tests. When the classpath for a container (e.g. the AM) is constructed the \
contents of mrapp-generated-classpath is read and added to the classpath, and the \
file itself is then added to the classpath so that later when the AM constructs a \
                classpath for a task container it can propagate the test classpath \
                correctly.
-
-This mechanism can be drastically simplified by propagating the system classpath of \
the current JVM (read from the java.class.path property) to a launched JVM, but only \
if running in the context of the mini YARN cluster. Any tests that use the mini YARN \
cluster will automatically work with this change. Although any that explicitly deal \
with mrapp-generated-classpath can be simplified. +     <blockquote>The test \
classpath includes a special file called 'mrapp-generated-classpath' (or similar in \
distributed shell) that is constructed at build time, and whose contents are a \
classpath with all the dependencies needed to run the tests. When the classpath for a \
container (e.g. the AM) is constructed the contents of mrapp-generated-classpath is \
read and added to the classpath, and the file itself is then added to the classpath \
so that later when the AM constructs a classpath for a task container it can \
propagate the test classpath correctly. +
+This mechanism can be drastically simplified by propagating the system classpath of \
the current JVM (read from the java.class.path property) to a launched JVM, but only \
if running in the context of the mini YARN cluster. Any tests that use the mini YARN \
cluster will automatically work with this change. Although any that explicitly deal \
with mrapp-generated-classpath can be simplified.  </blockquote></li>
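A minimal sketch of the propagation idea described above, assuming a hypothetical "running under the mini cluster" flag; this is the shape of the simplification, not the actual patch.
{code}
// Illustrative sketch: forward the test JVM's own classpath to the launched
// container environment, but only for mini-cluster runs.
import java.io.File;
import java.util.Map;

public class TestClasspathPropagationSketch {
  static void addTestClasspath(Map<String, String> containerEnv,
                               boolean inMiniYarnCluster) {
    if (!inMiniYarnCluster) {
      return; // normal containers keep their regular classpath handling
    }
    String testClasspath = System.getProperty("java.class.path");
    String existing = containerEnv.get("CLASSPATH");
    containerEnv.put("CLASSPATH", existing == null
        ? testClasspath
        : existing + File.pathSeparator + testClasspath);
  }
}
{code}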
 <li> <a href="https://issues.apache.org/jira/browse/YARN-127">YARN-127</a>.
      Major bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar \
Vavilapalli <br> @@ -645,25 +645,25 @@ This mechanism can be drastically simpli
 <li> <a href="https://issues.apache.org/jira/browse/YARN-116">YARN-116</a>.
      Major bug reported by xieguiming and fixed by xieguiming (resourcemanager)<br>
      <b>RM is missing ability to add include/exclude files without a restart</b><br>
-     <blockquote>The "yarn.resourcemanager.nodes.include-path" default value is "", \
                if we need to add an include file, we must currently restart the RM. 
-
-I suggest that for adding an include or exclude file, there should be no need to \
restart the RM; it should be enough to execute the refresh command. The HDFS NameNode already \
                has this ability.
-
-The fix is to modify the HostsFileReader class instances:
-
-From:
-{code}
-public HostsFileReader(String inFile, 
-                         String exFile)
-{code}
-To:
-{code}
- public HostsFileReader(Configuration conf, 
-                         String NODES_INCLUDE_FILE_PATH,String \
                DEFAULT_NODES_INCLUDE_FILE_PATH,
-                        String NODES_EXCLUDE_FILE_PATH,String \
                DEFAULT_NODES_EXCLUDE_FILE_PATH)
-{code}
-
-And thus, we can read the config file dynamically when a {{refreshNodes}} is invoked \
and therefore have no need to restart the ResourceManager. +     <blockquote>The \
"yarn.resourcemanager.nodes.include-path" default value is "", if we need to add an \
include file, we must currently restart the RM.  +
+I suggest that for adding an include or exclude file, there should be no need to \
restart the RM; it should be enough to execute the refresh command. The HDFS NameNode already \
has this ability. +
+The fix is to modify the HostsFileReader class instances:
+
+From:
+{code}
+public HostsFileReader(String inFile, 
+                         String exFile)
+{code}
+To:
+{code}
+ public HostsFileReader(Configuration conf, 
+                         String NODES_INCLUDE_FILE_PATH,String \
DEFAULT_NODES_INCLUDE_FILE_PATH, +                        String \
NODES_EXCLUDE_FILE_PATH,String DEFAULT_NODES_EXCLUDE_FILE_PATH) +{code}
+
+And thus, we can read the config file dynamically when a {{refreshNodes}} is invoked \
and therefore have no need to restart the ResourceManager.  </blockquote></li>
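An illustrative sketch of the behaviour being requested: resolve the include path from the Configuration each time a refresh is requested, so no RM restart is needed. The class below is a simplified stand-in, not the real HostsFileReader API; only the "yarn.resourcemanager.nodes.include-path" key comes from the note above.
{code}
// Simplified stand-in; class name and refresh logic are illustrative.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;

public class RefreshableHostsSketch {
  private final Configuration conf;
  private final Set<String> includedHosts = new HashSet<String>();

  public RefreshableHostsSketch(Configuration conf) {
    this.conf = conf;
  }

  /** Re-read the include file currently named in the configuration, if any. */
  public synchronized void refreshNodes() throws IOException {
    includedHosts.clear();
    String path = conf.get("yarn.resourcemanager.nodes.include-path", "");
    if (!path.isEmpty()) {
      includedHosts.addAll(
          Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8));
    }
  }

  public synchronized Set<String> getIncludedHosts() {
    return new HashSet<String>(includedHosts);
  }
}
{code}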
 <li> <a href="https://issues.apache.org/jira/browse/YARN-103">YARN-103</a>.
      Major improvement reported by Bikas Saha and fixed by Bikas Saha <br>
@@ -676,10 +676,10 @@ And thus, we can read the config file dy
 <li> <a href="https://issues.apache.org/jira/browse/YARN-94">YARN-94</a>.
      Major bug reported by Vinod Kumar Vavilapalli and fixed by Hitesh Shah \
                (applications/distributed-shell)<br>
      <b>DistributedShell jar should point to Client as the main class by \
                default</b><br>
-     <blockquote>Today, it says so..
-{code}
-$ $YARN_HOME/bin/yarn jar \
                $YARN_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar
                
-RunJar jarFile [mainClass] args...
+     <blockquote>Today, it says so..
+{code}
+$ $YARN_HOME/bin/yarn jar \
$YARN_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-$VERSION.jar \
+RunJar jarFile [mainClass] args...  {code}</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-93">YARN-93</a>.
      Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
@@ -688,8 +688,8 @@ RunJar jarFile [mainClass] args...
 <li> <a href="https://issues.apache.org/jira/browse/YARN-82">YARN-82</a>.
      Minor bug reported by Andy Isaacson and fixed by Hemanth Yamijala \
(nodemanager)<br>  <b>YARN local-dirs defaults to /tmp/nm-local-dir</b><br>
-     <blockquote>{{yarn.nodemanager.local-dirs}} defaults to {{/tmp/nm-local-dir}}.  \
It should be {hadoop.tmp.dir}/nm-local-dir or similar.  Among other problems, this \
                can prevent multiple test clusters from starting on the same machine.
-
+     <blockquote>{{yarn.nodemanager.local-dirs}} defaults to {{/tmp/nm-local-dir}}.  \
It should be {hadoop.tmp.dir}/nm-local-dir or similar.  Among other problems, this \
can prevent multiple test clusters from starting on the same machine. +
 Thanks to Hemanth Yamijala for reporting this issue.</blockquote></li>
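A small sketch of why the suggested default helps: Configuration expands ${...} references when a value is read, so a default of ${hadoop.tmp.dir}/nm-local-dir resolves per user rather than colliding on /tmp/nm-local-dir. The snippet only demonstrates the substitution; it is not the NodeManager code.
{code}
// Illustrative check of ${hadoop.tmp.dir} substitution on read.
import org.apache.hadoop.conf.Configuration;

public class LocalDirsDefaultSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(); // loads core-default.xml
    conf.set("yarn.nodemanager.local-dirs", "${hadoop.tmp.dir}/nm-local-dir");
    // get() performs ${...} substitution against other configured properties.
    System.out.println(conf.get("yarn.nodemanager.local-dirs"));
  }
}
{code}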
 <li> <a href="https://issues.apache.org/jira/browse/YARN-78">YARN-78</a>.
      Major bug reported by Bikas Saha and fixed by Bikas Saha (applications)<br>
@@ -698,8 +698,8 @@ Thanks to Hemanth Yamijala for reporting
 <li> <a href="https://issues.apache.org/jira/browse/YARN-72">YARN-72</a>.
      Major bug reported by Hitesh Shah and fixed by Sandy Ryza (nodemanager)<br>
      <b>NM should handle cleaning up containers when it shuts down</b><br>
-     <blockquote>Ideally, the NM should wait for a limited amount of time when it \
gets a shutdown signal for existing containers to complete and kill the containers ( \
                if we pick an aggressive approach ) after this time interval. 
-
+     <blockquote>Ideally, the NM should wait for a limited amount of time when it \
gets a shutdown signal for existing containers to complete and kill the containers ( \
if we pick an aggressive approach ) after this time interval.  +
 For NMs which come up after an unclean shutdown, the NM should look through its \
directories for existing container.pids and try to kill any existing containers \
matching the pids found. </blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-57">YARN-57</a>.
      Major improvement reported by Radim Kolar and fixed by Radim Kolar \
(nodemanager)<br> @@ -712,29 +712,29 @@ For NMs which come up after an unclean s
 <li> <a href="https://issues.apache.org/jira/browse/YARN-43">YARN-43</a>.
      Major bug reported by Thomas Graves and fixed by Thomas Graves <br>
      <b>TestResourceTrackerService fail intermittently on jdk7</b><br>
-     <blockquote>Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: \
                1.73 sec &lt;&lt;&lt; FAILURE!
-testDecommissionWithIncludeHosts(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService) \
                Time elapsed: 0.086 sec  &lt;&lt;&lt; FAILURE!
-junit.framework.AssertionFailedError: expected:&lt;0&gt; but was:&lt;1&gt;        at \
                junit.framework.Assert.fail(Assert.java:47)
-        at junit.framework.Assert.failNotEquals(Assert.java:283)
-        at junit.framework.Assert.assertEquals(Assert.java:64)
-        at junit.framework.Assert.assertEquals(Assert.java:195)
-        at junit.framework.Assert.assertEquals(Assert.java:201)
+     <blockquote>Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: \
1.73 sec &lt;&lt;&lt; FAILURE! \
+testDecommissionWithIncludeHosts(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService) \
Time elapsed: 0.086 sec  &lt;&lt;&lt; FAILURE! +junit.framework.AssertionFailedError: \
expected:&lt;0&gt; but was:&lt;1&gt;        at \
junit.framework.Assert.fail(Assert.java:47) +        at \
junit.framework.Assert.failNotEquals(Assert.java:283) +        at \
junit.framework.Assert.assertEquals(Assert.java:64) +        at \
junit.framework.Assert.assertEquals(Assert.java:195) +        at \
junit.framework.Assert.assertEquals(Assert.java:201)  at \
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testDecommissionWithIncludeHosts(TestResourceTrackerService.java:90)</blockquote></li>
  <li> <a href="https://issues.apache.org/jira/browse/YARN-40">YARN-40</a>.
      Major bug reported by Devaraj K and fixed by Devaraj K (client)<br>
      <b>Provide support for missing yarn commands</b><br>
-     <blockquote>1. status &lt;app-id&gt;
-2. kill &lt;app-id&gt; (Already issue present with Id : MAPREDUCE-3793)
-3. list-apps [all]
+     <blockquote>1. status &lt;app-id&gt;
+2. kill &lt;app-id&gt; (Already issue present with Id : MAPREDUCE-3793)
+3. list-apps [all]
 4. nodes-report</blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-33">YARN-33</a>.
      Major bug reported by Mayank Bansal and fixed by Mayank Bansal \
                (nodemanager)<br>
      <b>LocalDirsHandler should validate the configured local and log dirs</b><br>
-     <blockquote>When yarn.nodemanager.log-dirs is configured with a file:// URI, node \
manager startup creates a directory named like file:// under the CWD.
-
-Which should not be there.
-
-Thanks,
+     <blockquote>When yarn.nodemanager.log-dirs is configured with a file:// URI, node \
manager startup creates a directory named like file:// under the CWD. +
+Which should not be there.
+
+Thanks,
 Mayank </blockquote></li>
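An illustrative validation sketch (not the real LocalDirsHandler code): a configured dir such as file:///var/log/yarn carries a URI scheme, and without a check the raw string can end up treated as a relative path ("file:" under the working directory).
{code}
// Illustrative helper: normalize file:// URIs and reject non-local schemes.
import java.net.URI;

public class DirValidationSketch {
  static String normalizeLocalDir(String dir) {
    URI uri = URI.create(dir);
    String scheme = uri.getScheme();
    if (scheme == null) {
      return dir;                       // already a plain filesystem path
    }
    if ("file".equalsIgnoreCase(scheme)) {
      return uri.getPath();             // strip the file:// prefix
    }
    throw new IllegalArgumentException(
        "local/log dirs must be on the local filesystem: " + dir);
  }

  public static void main(String[] args) {
    System.out.println(normalizeLocalDir("file:///var/log/hadoop-yarn"));
    System.out.println(normalizeLocalDir("/var/log/hadoop-yarn"));
  }
}
{code}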
 <li> <a href="https://issues.apache.org/jira/browse/YARN-32">YARN-32</a>.
      Major bug reported by Thomas Graves and fixed by Vinod Kumar Vavilapalli <br>
@@ -743,23 +743,23 @@ Mayank </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/YARN-30">YARN-30</a>.
      Major bug reported by Thomas Graves and fixed by Thomas Graves <br>
      <b>TestNMWebServicesApps, TestRMWebServicesApps and TestRMWebServicesNodes fail \
                on jdk7</b><br>
-     <blockquote>It looks like the string changed from "const class" to "constant". 
-
-
-Tests run: 19, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 6.786 sec \
                &lt;&lt;&lt; FAILURE!
-testNodeAppsStateInvalid(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesApps) \
Time elapsed: 0.248 sec  &lt;&lt;&lt; FAILURE! +     <blockquote>It looks like the \
string changed from "const class" to "constant".  +
+
+Tests run: 19, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 6.786 sec \
&lt;&lt;&lt; FAILURE! \
+testNodeAppsStateInvalid(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesApps) \
Time elapsed: 0.248 sec  &lt;&lt;&lt; FAILURE!  java.lang.AssertionError: exception \
message doesn't match, got: No enum constant \
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationState.FOO_STATE \
expected: No enum const class \
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationState.FOO_STATE</blockquote></li>
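One way to keep such tests stable across JDKs, sketched below (this is not the actual fix): assert on the stable part of the message, the bogus enum value, rather than the JVM-specific prefix, which changed from "No enum const class ..." to "No enum constant ...".
{code}
// Illustrative sketch: check the invariant part of the IllegalArgumentException
// message produced by Enum.valueOf, not the JDK-specific wording.
import java.util.concurrent.TimeUnit;

public class EnumMessageCheckSketch {
  public static void main(String[] args) {
    String got = "";
    try {
      Enum.valueOf(TimeUnit.class, "FOO_STATE");
    } catch (IllegalArgumentException e) {
      got = e.getMessage();
    }
    if (!got.contains("FOO_STATE")) {   // stable across jdk6/jdk7 wording
      throw new AssertionError("unexpected message: " + got);
    }
    System.out.println("message check passed: " + got);
  }
}
{code}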
  <li> <a href="https://issues.apache.org/jira/browse/YARN-28">YARN-28</a>.
      Major bug reported by Thomas Graves and fixed by Thomas Graves <br>
      <b>TestCompositeService fails on jdk7</b><br>
-     <blockquote>test TestCompositeService fails when run with jdk7.
-
+     <blockquote>test TestCompositeService fails when run with jdk7.
+
 It appears it expects test testCallSequence to be called first and the sequence \
numbers to start at 0. On jdk7 it is not called first and the sequence number has \
already been incremented.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-23">YARN-23</a>.
      Major improvement reported by Karthik Kambatla and fixed by Karthik Kambatla \
                (scheduler)<br>
      <b>FairScheduler: FSQueueSchedulable#updateDemand() - potential redundant \
                aggregation</b><br>
-     <blockquote>In FS, FSQueueSchedulable#updateDemand() limits the demand to \
maxTasks only after iterating through all the pools and computing the \
                final demand. 
-
+     <blockquote>In FS, FSQueueSchedulable#updateDemand() limits the demand to \
maxTasks only after iterating through all the pools and computing the final demand.  +
 By checking if the demand has reached maxTasks in every iteration, we can avoid \
redundant work, at the expense of one condition check every \
iteration.</blockquote></li>  <li> <a \
                href="https://issues.apache.org/jira/browse/YARN-3">YARN-3</a>.
      Major sub-task reported by Arun C Murthy and fixed by Andrew Ferguson <br>
@@ -808,7 +808,7 @@ By checking if the demand has reached ma
 <li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4928">MAPREDUCE-4928</a>.
                
      Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas \
(applicationmaster , security)<br>  <b>Use token request messages defined in hadoop \
                common </b><br>
-     <blockquote>Protobuf message GetDelegationTokenRequestProto field renewer is \
made required from optional. This change is not wire compatible with the older \
releases. +     <blockquote>Protobuf message GetDelegationTokenRequestProto field \
renewer is made required from optional. This change is not wire compatible with the \
older releases.  </blockquote></li>
 <li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4925">MAPREDUCE-4925</a>.
                
      Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla \
(examples)<br> @@ -1269,7 +1269,7 @@ By checking if the demand has reached ma
 <li> <a href="https://issues.apache.org/jira/browse/HDFS-4369">HDFS-4369</a>.
      Blocker bug reported by Suresh Srinivas and fixed by Suresh Srinivas \
(namenode)<br>  <b>GetBlockKeysResponseProto does not handle null response</b><br>
-     <blockquote>Protobuf message GetBlockKeysResponseProto member keys is made \
optional from required so that null values can be passed over the wire. This is an \
incompatible wire protocol change and does not affect the API backward compatibility. \
+     <blockquote>Protobuf message GetBlockKeysResponseProto member keys is made \
optional from required so that null values can be passed over the wire. This is an \
incompatible wire protocol change and does not affect the API backward compatibility. \
</blockquote></li>  <li> <a \
href="https://issues.apache.org/jira/browse/HDFS-4367">HDFS-4367</a>.

[... 56 lines stripped ...]

