[prev in list] [next in list] [prev in thread] [next in thread] 

List:       hadoop-user
Subject:    Re: HDFS fsck command giving health as corrupt for '/'
From:       Ravi Prakash <ravihadoop () gmail ! com>
Date:       2017-02-16 21:21:21
Message-ID: CAMs9kVgVJW9cK6MNDcX5+Sr5d5xCfmQ0kF0xFN58zfyTMbRopQ () mail ! gmail ! com
[Download RAW message or body]

Hi Nishant!

I'd suggest reading the HDFS user guide to begin with and becoming familiar
with the architecture.
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html

Where are the blocks stored on the datanodes? Were they on persistent
storage on the EC2 instances, or on ephemeral storage? Can you log on to the
datanodes and find the "blk_*" block files and their corresponding
"blk_*.meta" checksum files?

e.g. You can identify the locations of an HDFS file's blocks using this
command:

HADOOP_USER_NAME=hdfs hdfs fsck <SOME_FILE_IN_HDFS> -files -blocks -locations

If you have Kerberos turned on, then you'd have to get the super-user
credentials and run the command as the super-user.
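If it helps, here is a minimal sketch of what those block files look like on
a datanode's disk. The data-dir root and subdir layout below are assumptions
(on a real datanode, start from whatever dfs.datanode.data.dir points at);
the script builds a mock directory, using the block ID from your log, purely
so it can be run anywhere:

```python
import pathlib, tempfile

# Mock datanode data dir (assumption: real root is dfs.datanode.data.dir).
dn_dir = pathlib.Path(tempfile.mkdtemp())
finalized = (dn_dir / "current"
             / "BP-1831277630-10.16.37.124-1484306078618"
             / "current" / "finalized" / "subdir0" / "subdir0")
finalized.mkdir(parents=True)

# A block is stored as a data file plus a checksum (.meta) file.
(finalized / "blk_1073793876").touch()
(finalized / "blk_1073793876_55013.meta").touch()

# Equivalent of:  find <data-dir> -name 'blk_*' -type f
blocks = sorted(p.name for p in dn_dir.rglob("blk_*") if p.is_file())
print(blocks)  # ['blk_1073793876', 'blk_1073793876_55013.meta']
```

If the search on your real datanodes comes back empty (e.g. because the data
dirs were on ephemeral EC2 storage that was wiped), that would explain fsck
reporting every block as missing.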

If there are no datanodes in the list, that means *no datanodes* have
reported the block. NOTE: on startup the Namenode doesn't know where any
block is stored; it only persists the mapping from each HDFS file to its
blocks. The Datanodes are the ones that report their blocks to the Namenode,
and on every startup the Namenode rebuilds from those reports its knowledge
of where each block is located.
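To illustrate the point above with a toy model (this is not Hadoop code,
just a sketch of the two mappings): file-to-blocks survives a restart,
block-to-locations does not, so until block reports arrive every block has
an empty location list -- which is what "after checking nodes = []" in the
log means:

```python
class ToyNameNode:
    """Toy model of the namenode's two mappings (illustrative only)."""

    def __init__(self):
        # Persisted in the fsimage/edit log: survives a restart.
        self.file_to_blocks = {"/test/inputdata/derby.log": ["blk_1073793876"]}
        # In-memory only: rebuilt from datanode block reports after startup.
        self.block_locations = {}

    def receive_block_report(self, datanode, block_ids):
        for b in block_ids:
            self.block_locations.setdefault(b, set()).add(datanode)

    def locate(self, path):
        # A block no datanode has reported gets an empty location list.
        return {b: sorted(self.block_locations.get(b, set()))
                for b in self.file_to_blocks[path]}

nn = ToyNameNode()
print(nn.locate("/test/inputdata/derby.log"))  # {'blk_1073793876': []}
nn.receive_block_report("datanode-1", ["blk_1073793876"])
print(nn.locate("/test/inputdata/derby.log"))  # {'blk_1073793876': ['datanode-1']}
```

In other words: if the fsimage is intact but the datanodes lost (or never
re-reported) their block files, fsck will show exactly your symptom --
directory structure present, every block missing.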

HTH
Ravi


On Wed, Feb 15, 2017 at 11:53 PM, Nishant Verma <nishant.verma0702@gmail.com
> wrote:

> Hi Philippe
> 
> Yes, I did. I restarted NameNode and other daemons multiple times.
> I found that all my files had somehow got corrupted. I was able to fix the
> issue by running the command below:
> 
> hdfs fsck / | egrep -v '^\.+$' | grep -v replica | grep -v Replica
> 
> But it deleted all the files from my cluster. Only the directory
> structures were left.
> 
> My main concern is: how did this issue happen, and how can I prevent it
> from happening in future?
> 
> Regards
> Nishant
> 
> Nishant
> 
> sent from handheld device. please ignore typos.
> 
> On Wed, Feb 15, 2017 at 3:01 PM, Philippe Kernévez <pkernevez@octo.com>
> wrote:
> 
> > Hi Nishant,
> > 
> > Your namenode is probably unable to communicate with your datanodes. Did
> > you restart all the HDFS services?
> > 
> > Regards,
> > Philipp
> > 
> > On Tue, Feb 14, 2017 at 10:43 AM, Nishant Verma <
> > nishant.verma0702@gmail.com> wrote:
> > 
> > > Hi
> > > 
> > > I have an open-source Hadoop 2.7.3 cluster (2 masters + 3 slaves)
> > > installed on AWS EC2 instances. I am using the cluster to integrate it with
> > > Kafka Connect.
> > > 
> > > The cluster was set up last month and the Kafka Connect setup was
> > > completed a fortnight ago. Since then, we have been able to write Kafka
> > > topic records to our HDFS and perform various operations on them.
> > > 
> > > Since yesterday afternoon, I find that no Kafka topic records are getting
> > > committed to the cluster. When I try to open the older files, I get the
> > > error below. When I copy a new file to the cluster from local, it opens at
> > > first, but after some time it starts showing a similar IOException:
> > > 
> > > 17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
> > > 17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
> > > 17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 499.3472970548959 msec.
> > > 17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
> > > 17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
> > > 17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 4988.873277172643 msec.
> > > 17/02/14 07:58:00 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
> > > 17/02/14 07:58:00 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
> > > 17/02/14 07:58:00 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 8598.311122824263 msec.
> > > 17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
> > > 17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
> > > 17/02/14 07:58:09 WARN hdfs.DFSClient: DFS Read
> > > org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
> > >         at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
> > >         at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
> > >         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> > >         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> > >         at java.io.DataInputStream.read(DataInputStream.java:100)
> > >         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> > >         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> > >         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> > >         at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:107)
> > >         at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:102)
> > >         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> > >         at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> > >         at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> > >         at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> > >         at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> > >         at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> > >         at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> > >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> > >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> > >         at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> > > cat: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
> > > When I run "hdfs fsck /", I get:
> > > 
> > > Total size:    667782677 B
> > > Total dirs:    406
> > > Total files:   44485
> > > Total symlinks:                0
> > > Total blocks (validated):      43767 (avg. block size 15257 B)
> > > ********************************
> > > UNDER MIN REPL'D BLOCKS:      43766 (99.99772 %)
> > > dfs.namenode.replication.min: 1
> > > CORRUPT FILES:        43766
> > > MISSING BLOCKS:       43766
> > > MISSING SIZE:         667781648 B
> > > CORRUPT BLOCKS:       43766
> > > ********************************
> > > Minimally replicated blocks:   1 (0.0022848265 %)
> > > Over-replicated blocks:        0 (0.0 %)
> > > Under-replicated blocks:       0 (0.0 %)
> > > Mis-replicated blocks:         0 (0.0 %)
> > > Default replication factor:    3
> > > Average block replication:     6.8544796E-5
> > > Corrupt blocks:                43766
> > > Missing replicas:              0 (0.0 %)
> > > Number of data-nodes:          3
> > > Number of racks:               1
> > > FSCK ended at Tue Feb 14 07:59:10 UTC 2017 in 932 milliseconds
> > > 
> > > 
> > > The filesystem under path '/' is CORRUPT
> > > 
> > > That means all my files got corrupted somehow.
> > > 
> > > I want to recover my HDFS and fix the corrupt health status. Also, I
> > > would like to understand how such an issue occurred suddenly and how to
> > > prevent it in future.
> > > 
> > > 
> > > Thanks
> > > 
> > > Nishant Verma
> > > 
> > 
> > 
> > 
> > --
> > Philippe Kernévez
> > 
> > 
> > 
> > Directeur technique (Suisse),
> > pkernevez@octo.com
> > +41 79 888 33 32
> > 
> > Retrouvez OCTO sur OCTO Talk : http://blog.octo.com
> > OCTO Technology http://www.octo.com
> > 
> 
> 





