List: hadoop-commits
Subject: svn commit: r1343738 - in /hadoop/common/branches/branch-1: CHANGES.txt src/hdfs/org/apache/hadoop/h
From: suresh@apache.org
Date: 2012-05-29 14:17:56
Message-ID: 20120529141756.9E97F238899C@eris.apache.org
Author: suresh
Date: Tue May 29 14:17:56 2012
New Revision: 1343738
URL: http://svn.apache.org/viewvc?rev=1343738&view=rev
Log:
HDFS-3453. HDFS 1.x client is not interoperable with pre 1.x server. Contributed by Kihwal Lee.
Modified:
hadoop/common/branches/branch-1/CHANGES.txt
hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java
Modified: hadoop/common/branches/branch-1/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/CHANGES.txt?rev=1343738&r1=1343737&r2=1343738&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/CHANGES.txt (original)
+++ hadoop/common/branches/branch-1/CHANGES.txt Tue May 29 14:17:56 2012
@@ -249,6 +249,9 @@ Release 1.1.0 - unreleased

     HADOOP-8329. Build fails with Java 7. (eli)

+    HDFS-3453. HDFS 1.x client is not interoperable with pre 1.x server.
+    (Kihwal Lee via suresh)
+
 Release 1.0.3 - 2012.05.07

   NEW FEATURES
Modified: hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java?rev=1343738&r1=1343737&r2=1343738&view=diff
==============================================================================
--- hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java (original)
+++ hadoop/common/branches/branch-1/src/hdfs/org/apache/hadoop/hdfs/DFSClient.java Tue May 29 14:17:56 2012
@@ -3342,8 +3342,17 @@ public class DFSClient implements FSCons
     computePacketChunkSize(writePacketSize, bytesPerChecksum);
     try {
-      namenode.create(
-          src, masked, clientName, overwrite, createParent, replication, blockSize);
+      // Make sure the regular create() is done through the old create().
+      // This is done to ensure that newer clients (post-1.0) can talk to
+      // older clusters (pre-1.0). Older clusters lack the new create()
+      // method accepting createParent as one of the arguments.
+      if (createParent) {
+        namenode.create(
+            src, masked, clientName, overwrite, replication, blockSize);
+      } else {
+        namenode.create(
+            src, masked, clientName, overwrite, false, replication, blockSize);
+      }
     } catch(RemoteException re) {
       throw re.unwrapRemoteException(AccessControlException.class,
           FileAlreadyExistsException.class,
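The compatibility pattern in the patch above is worth spelling out: when the new parameter (`createParent`) is set to the value the old RPC implied anyway (`true`), the client calls the pre-1.0 overload, so old NameNodes never see a method they don't implement; only a genuinely new request (`createParent=false`) uses the new overload. The sketch below illustrates that dispatch with stand-in types; the interface, recorder list, and simplified signatures (no `FsPermission masked`) are illustrative assumptions, not the actual Hadoop source.

```java
import java.util.ArrayList;
import java.util.List;

public class CreateDispatchSketch {
    // Stand-in for the NameNode RPC interface: pre-1.x servers only
    // implement the first (old) overload.
    interface NameNodeProtocol {
        void create(String src, String clientName, boolean overwrite,
                    int replication, long blockSize);                       // old RPC
        void create(String src, String clientName, boolean overwrite,
                    boolean createParent, int replication, long blockSize); // new RPC
    }

    // Records which overload was invoked, so the dispatch is observable.
    static final List<String> calls = new ArrayList<>();

    static final NameNodeProtocol namenode = new NameNodeProtocol() {
        public void create(String src, String clientName, boolean overwrite,
                           int replication, long blockSize) {
            calls.add("old");
        }
        public void create(String src, String clientName, boolean overwrite,
                           boolean createParent, int replication, long blockSize) {
            calls.add("new");
        }
    };

    // Mirrors the patched DFSClient logic: prefer the old RPC whenever
    // createParent is true (the old method's implicit behavior).
    static void clientCreate(String src, boolean overwrite,
                             boolean createParent, int replication, long blockSize) {
        if (createParent) {
            namenode.create(src, "client", overwrite, replication, blockSize);
        } else {
            namenode.create(src, "client", overwrite, false, replication, blockSize);
        }
    }

    public static void main(String[] args) {
        clientCreate("/a/b", true, true, 3, 64L * 1024 * 1024);   // routes to old RPC
        clientCreate("/a/b", true, false, 3, 64L * 1024 * 1024);  // needs new RPC
        if (!calls.equals(List.of("old", "new"))) {
            throw new AssertionError("unexpected dispatch: " + calls);
        }
        System.out.println("dispatch: " + calls);
    }
}
```

The trade-off is that the common case stays wire-compatible with old servers, while the rarely used `createParent=false` path fails against a pre-1.x cluster (as it must, since the server has no such method).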