
List:       hadoop-user
Subject:    Re: Can not upload local file to HDFS
From:       He Chen <airbots () gmail ! com>
Date:       2010-09-28 20:07:28
Message-ID: AANLkTimQ+dYghHyrucvnBUY1kvqpTca3pNZ=u11FWmsS () mail ! gmail ! com

I found the problem. It was caused by a system disk error, which left the
whole "/" directory mounted read-only. When I run copyFromLocal, the client
uses the local /tmp directory as a buffer, but Hadoop does not know that
directory is read-only. That is why it reported a datanode problem.
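For anyone who hits the same symptom, here is a minimal sketch of the check
I have in mind, assuming the client really does stage data under the JVM's
java.io.tmpdir as described above (the class name SafeCopyToHdfs and the
argument paths are just placeholders, not part of Hadoop). It only verifies
that the local temp directory is writable before calling copyFromLocalFile,
so a read-only mount fails with a clear local error instead of a misleading
datanode error.

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not from the original thread: fail fast if the
// local staging directory is read-only before touching HDFS.
public class SafeCopyToHdfs {
    public static void main(String[] args) throws IOException {
        // The JVM temp dir, usually /tmp on Linux.
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        if (!tmp.canWrite()) {
            throw new IOException("Local temp dir is not writable: " + tmp
                    + " (filesystem mounted read-only?)");
        }

        Configuration conf = new Configuration();   // picks up core-site.xml
        FileSystem fs = FileSystem.get(conf);       // the configured HDFS
        fs.copyFromLocalFile(new Path(args[0]),     // local source
                             new Path(args[1]));    // HDFS destination
        fs.close();
    }
}

With a check like that, the broken disk would have shown up as "Local temp
dir is not writable" instead of the EOFException from the write pipeline.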

On Mon, Sep 27, 2010 at 10:34 AM, He Chen <airbots@gmail.com> wrote:

> Thanks, but I think that goes too far beyond the problem itself.
>
>
> On Sun, Sep 26, 2010 at 11:43 AM, Nan Zhu <zhunansjtu@gmail.com> wrote:
>
>> Have you ever checked the log files in the directory?
>>
>> I always find some important information there.
>>
>> I also suggest you recompile Hadoop with ant, since the mapred daemons
>> don't work either.
>>
>> Nan
>>
>> On Sun, Sep 26, 2010 at 7:29 PM, He Chen <airbots@gmail.com> wrote:
>>
>> > The problem is that every datanode may be listed in the error report.
>> > Does that mean all my datanodes are bad?
>> >
>> > One thing I forgot to mention: I can not use start-all.sh and
>> > stop-all.sh to start and stop the dfs and mapred processes on my
>> > cluster, but the jobtracker and namenode web interfaces still work.
>> >
>> > I think I can solve this by ssh-ing to every node, killing the current
>> > Hadoop processes, and restarting them; in my opinion that would also
>> > fix the previous problem. But I really want to know why HDFS reports
>> > these errors to me.
>> >
>> >
>> > On Sat, Sep 25, 2010 at 11:20 PM, Nan Zhu <zhunansjtu@gmail.com> wrote:
>> >
>> > > Hi Chen,
>> > >
>> > > It seems that you have a bad datanode. Maybe you should reformat them?
>> > >
>> > > Nan
>> > >
>> > > On Sun, Sep 26, 2010 at 10:42 AM, He Chen <airbots@gmail.com> wrote:
>> > >
>> > > > Hello Neil
>> > > >
>> > > > No matter how big the file is, it always reports this error. The
>> > > > file sizes range from 10 KB to 100 MB.
>> > > >
>> > > > On Sat, Sep 25, 2010 at 6:08 PM, Neil Ghosh <neil.ghosh@gmail.com> wrote:
>> > > >
>> > > > > How big is the file? Did you try formatting the namenode and
>> > > > > datanode?
>> > > > >
>> > > > > On Sun, Sep 26, 2010 at 2:12 AM, He Chen <airbots@gmail.com> wrote:
>> > > > >
>> > > > > > Hello everyone
>> > > > > >
>> > > > > > I can not upload a local file to HDFS. It gives the following
>> > > > > > errors:
>> > > > > >
>> > > > > > WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for
>> > > > > > block blk_-236192853234282209_419415java.io.EOFException
>> > > > > >        at java.io.DataInputStream.readFully(DataInputStream.java:197)
>> > > > > >        at java.io.DataInputStream.readLong(DataInputStream.java:416)
>> > > > > >        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2397)
>> > > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
>> > > > > > blk_-236192853234282209_419415 bad datanode[0] 192.168.0.23:50010
>> > > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
>> > > > > > blk_-236192853234282209_419415 in pipeline 192.168.0.23:50010,
>> > > > > > 192.168.0.39:50010: bad datanode 192.168.0.23:50010
>> > > > > > Any response will be appreciated!
>> > > > > >
>> > > > > >
>> > >
>> >
>>
>
>
>
> --
> Best Wishes!
> With best business regards!
>
> --
> Chen He
> (402)613-9298
> PhD. student of CSE Dept.
> Research Assistant of Holland Computing Center
> University of Nebraska-Lincoln
> Lincoln NE 68588
>

