
List:       hadoop-user
Subject:    Re: cygwin single node setup
From:       Onder SEZGIN <ondersezgin () gmail ! com>
Date:       2012-04-29 0:19:16
Message-ID: CAE9ARBTT=brXJHQQ+spUnvaG_X9+cbnjo06k11DP16mK6gifHQ () mail ! gmail ! com


Hi,

I tried them all, and finally I got the datanode up and running.

Thanks, Kasi.

But this time I am getting the following error:

$ ./bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
12/04/29 03:06:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/04/29 03:06:19 WARN snappy.LoadSnappy: Snappy native library not loaded
12/04/29 03:06:19 INFO mapred.FileInputFormat: Total input paths to process : 17
12/04/29 03:06:19 INFO mapred.JobClient: Cleaning up the staging area hdfs://127.0.0.1:9000/tmp/mapred/staging/EXT0125622/.staging/job_201204290300_0001
12/04/29 03:06:19 ERROR security.UserGroupInformation: PriviledgedActionException as:EXT0125622 cause:java.io.IOException: Not a file: hdfs://127.0.0.1:9000/user/EXT0125622/input/conf
java.io.IOException: Not a file: hdfs://127.0.0.1:9000/user/EXT0125622/input/conf
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:215)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
        at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
        at org.apache.hadoop.examples.Grep.run(Grep.java:69)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.Grep.main(Grep.java:93)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Interestingly, when I run the following command, the output looks reasonable.

$ ./bin/hadoop fs -lsr /
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:00 /tmp
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:01 /tmp/mapred
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:14 /tmp/mapred/staging
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622
drwx------   - EXT0125622 supergroup          0 2012-04-29 03:06 /tmp/mapred/staging/EXT0125622/.staging
drwx------   - EXT0125622 supergroup          0 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001
-rw-r--r--  10 EXT0125622 supergroup     142465 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.jar
-rw-r--r--  10 EXT0125622 supergroup       1825 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.split
-rw-r--r--   1 EXT0125622 supergroup        657 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.splitmetainfo
-rw-r--r--   1 EXT0125622 supergroup      20586 2012-04-29 02:14 /tmp/mapred/staging/EXT0125622/.staging/job_201204290211_0001/job.xml
drwx------   - EXT0125622 supergroup          0 2012-04-29 03:06 /tmp/mapred/staging/EXT0125622/.staging/job_201204290300_0001
-rw-r--r--  10 EXT0125622 supergroup     142465 2012-04-29 03:06 /tmp/mapred/staging/EXT0125622/.staging/job_201204290300_0001/job.jar
drwx------   - EXT0125622 supergroup          0 2012-04-29 03:01 /tmp/mapred/system
-rw-------   1 EXT0125622 supergroup          4 2012-04-29 03:01 /tmp/mapred/system/jobtracker.info
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 02:13 /user
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622/conf
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622/input
-rw-r--r--   1 EXT0125622 supergroup       7457 2012-04-29 02:13 /user/EXT0125622/input/capacity-scheduler.xml
drwxr-xr-x   - EXT0125622 supergroup          0 2012-04-29 03:05 /user/EXT0125622/input/conf
-rw-r--r--   1 EXT0125622 supergroup       7457 2012-04-29 03:05 /user/EXT0125622/input/conf/capacity-scheduler.xml
-rw-r--r--   1 EXT0125622 supergroup        535 2012-04-29 03:05 /user/EXT0125622/input/conf/configuration.xsl
-rw-r--r--   1 EXT0125622 supergroup        438 2012-04-29 03:05 /user/EXT0125622/input/conf/core-site.xml
-rw-r--r--   1 EXT0125622 supergroup        327 2012-04-29 03:05 /user/EXT0125622/input/conf/fair-scheduler.xml
-rw-r--r--   1 EXT0125622 supergroup       2292 2012-04-29 03:05 /user/EXT0125622/input/conf/hadoop-env.sh
-rw-r--r--   1 EXT0125622 supergroup       1488 2012-04-29 03:05 /user/EXT0125622/input/conf/hadoop-metrics2.properties
-rw-r--r--   1 EXT0125622 supergroup       4644 2012-04-29 03:05 /user/EXT0125622/input/conf/hadoop-policy.xml
-rw-r--r--   1 EXT0125622 supergroup        274 2012-04-29 03:05 /user/EXT0125622/input/conf/hdfs-site.xml
-rw-r--r--   1 EXT0125622 supergroup       4441 2012-04-29 03:05 /user/EXT0125622/input/conf/log4j.properties
-rw-r--r--   1 EXT0125622 supergroup       2033 2012-04-29 03:05 /user/EXT0125622/input/conf/mapred-queue-acls.xml
-rw-r--r--   1 EXT0125622 supergroup        290 2012-04-29 03:05 /user/EXT0125622/input/conf/mapred-site.xml
-rw-r--r--   1 EXT0125622 supergroup         10 2012-04-29 03:05 /user/EXT0125622/input/conf/masters
-rw-r--r--   1 EXT0125622 supergroup         10 2012-04-29 03:05 /user/EXT0125622/input/conf/slaves
-rw-r--r--   1 EXT0125622 supergroup       1243 2012-04-29 03:05 /user/EXT0125622/input/conf/ssl-client.xml.example
-rw-r--r--   1 EXT0125622 supergroup       1195 2012-04-29 03:05 /user/EXT0125622/input/conf/ssl-server.xml.example
-rw-r--r--   1 EXT0125622 supergroup        382 2012-04-29 03:05 /user/EXT0125622/input/conf/taskcontroller.cfg
-rw-r--r--   1 EXT0125622 supergroup        535 2012-04-29 02:13 /user/EXT0125622/input/configuration.xsl
-rw-r--r--   1 EXT0125622 supergroup        438 2012-04-29 02:13 /user/EXT0125622/input/core-site.xml
-rw-r--r--   1 EXT0125622 supergroup        327 2012-04-29 02:13 /user/EXT0125622/input/fair-scheduler.xml
-rw-r--r--   1 EXT0125622 supergroup       2292 2012-04-29 02:13 /user/EXT0125622/input/hadoop-env.sh
-rw-r--r--   1 EXT0125622 supergroup       1488 2012-04-29 02:13 /user/EXT0125622/input/hadoop-metrics2.properties
-rw-r--r--   1 EXT0125622 supergroup       4644 2012-04-29 02:13 /user/EXT0125622/input/hadoop-policy.xml
-rw-r--r--   1 EXT0125622 supergroup        274 2012-04-29 02:13 /user/EXT0125622/input/hdfs-site.xml
-rw-r--r--   1 EXT0125622 supergroup       4441 2012-04-29 02:13 /user/EXT0125622/input/log4j.properties
-rw-r--r--   1 EXT0125622 supergroup       2033 2012-04-29 02:13 /user/EXT0125622/input/mapred-queue-acls.xml
-rw-r--r--   1 EXT0125622 supergroup        290 2012-04-29 02:13 /user/EXT0125622/input/mapred-site.xml
-rw-r--r--   1 EXT0125622 supergroup         10 2012-04-29 02:13 /user/EXT0125622/input/masters
-rw-r--r--   1 EXT0125622 supergroup         10 2012-04-29 02:13 /user/EXT0125622/input/slaves
-rw-r--r--   1 EXT0125622 supergroup       1243 2012-04-29 02:13 /user/EXT0125622/input/ssl-client.xml.example
-rw-r--r--   1 EXT0125622 supergroup       1195 2012-04-29 02:13 /user/EXT0125622/input/ssl-server.xml.example
-rw-r--r--   1 EXT0125622 supergroup        382 2012-04-29 02:13 /user/EXT0125622/input/taskcontroller.cfg
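For context: the listing above shows both a directory input/conf and a flat copy of the same config files directly under input, which suggests `fs -put conf input` was run twice (the second run landing conf inside the already-existing input). The old mapred FileInputFormat does not recurse into subdirectories, so the directory entry input/conf is what triggers the "Not a file" error. A possible cleanup, sketched from the paths in the listing (not verified on this cluster):

```shell
# Remove the nested copy of the config files; the flat copies directly
# under input/ are enough for the grep example.
bin/hadoop fs -rmr /user/EXT0125622/input/conf

# Also remove the stray top-level conf directory created by the second put.
bin/hadoop fs -rmr /user/EXT0125622/conf

# Re-run the example against the now-flat input directory.
bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
```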

Any help would be appreciated.


On Sat, Apr 28, 2012 at 3:35 PM, kasi subrahmanyam
<kasisubbu440@gmail.com>wrote:

> Hi Onder,
> You could try formatting the namenode and restarting the daemons;
> that solved my problem most of the time.
> Maybe the running daemons were not able to pick up all of the
> datanode configurations.
>
> On Sat, Apr 28, 2012 at 4:23 PM, Onder SEZGIN <ondersezgin@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am a newbie, and I am following the quick start guide for a
> > single-node setup on Windows using Cygwin.
> >
> > In this step,
> >
> > $ bin/hadoop fs -put conf input
> >
> > I am getting the following errors.
> >
> > I have no files under /user/EXT0125622/input/conf/capacity-scheduler.xml.
> > That might be the reason for the errors I get, but why does Hadoop look
> > for such a directory when I have not configured anything like that? So,
> > presumably, Hadoop is making up and looking for such a file and directory?
> >
> > Any ideas or help are welcome.
> >
> > Cheers
> > Onder
> >
> > 12/04/27 13:44:37 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> >        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >        at java.lang.reflect.Method.invoke(Method.java:601)
> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:415)
> >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> >
> >        at org.apache.hadoop.ipc.Client.call(Client.java:1066)
> >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> >        at $Proxy1.addBlock(Unknown Source)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >        at java.lang.reflect.Method.invoke(Method.java:601)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> >        at $Proxy1.addBlock(Unknown Source)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
> >
> > 12/04/27 13:44:37 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> > 12/04/27 13:44:37 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/EXT0125622/input/conf/capacity-scheduler.xml" - Aborting...
> > put: java.io.IOException: File /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated to 0 nodes, instead of 1
> > 12/04/27 13:44:37 ERROR hdfs.DFSClient: Exception closing file /user/EXT0125622/input/conf/capacity-scheduler.xml : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> >        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >        at java.lang.reflect.Method.invoke(Method.java:601)
> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:415)
> >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> >
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/EXT0125622/input/conf/capacity-scheduler.xml could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> >        at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >        at java.lang.reflect.Method.invoke(Method.java:601)
> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:415)
> >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> >
> >        at org.apache.hadoop.ipc.Client.call(Client.java:1066)
> >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> >        at $Proxy1.addBlock(Unknown Source)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >        at java.lang.reflect.Method.invoke(Method.java:601)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> >        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> >        at $Proxy1.addBlock(Unknown Source)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
> >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
> >
> >
> > --
> > Regards
> > Onder
> >
> >
> >
> >
> >
> > --
> > Regards
> > Onder
> >
>
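For reference, the format-and-restart sequence Kasi suggested corresponds to roughly the following Hadoop 1.x commands; note that formatting the namenode erases the existing HDFS metadata, so anything already stored in HDFS is lost:

```shell
# Stop all daemons (namenode, datanode, jobtracker, tasktracker).
bin/stop-all.sh

# Re-format the namenode; this wipes the existing HDFS metadata.
bin/hadoop namenode -format

# Start the daemons again and check that the datanode registered.
bin/start-all.sh
bin/hadoop dfsadmin -report
```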



-- 
Regards
Onder

