
List:       hadoop-user
Subject:    Re: Data doesn't write in HDFS
From:       Alexander Alten-Lorenz <wget.null () gmail ! com>
Date:       2015-03-27 11:06:53
Message-ID: 7D625B53-B836-43B5-88F6-F234DFB1041E () gmail ! com

Hi 

Have a closer look at:

java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
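
The "1 node(s) are excluded" part usually means the NameNode knows about the single DataNode but could not use it for this write. In a two-machine Windows setup that is most often the DataNode's data-transfer port (50010 by default) being blocked by the firewall, or the DataNode registering under an address the Flume machine cannot reach. As a rough first check from the Flume machine, you could probe TCP reachability of the relevant ports (hostnames and ports below are assumptions taken from your log, not a definitive diagnosis -- substitute your own):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage from the Flume machine (addresses are illustrative):
#   port_open("192.168.56.1", 9000)   # NameNode RPC port from your log
#   port_open("192.168.56.1", 50010)  # default DataNode data-transfer port
# If the NameNode port is reachable but the DataNode port is not,
# the Windows firewall on the Hadoop machine is a likely suspect.
```

If the DataNode port turns out to be unreachable, opening it in the firewall (or binding the DataNode to an address routable from the Flume host) is the usual fix; `hdfs dfsadmin -report` on the Hadoop machine will also show whether the DataNode is live and has free capacity.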

BR,
 AL


> On 27 Mar 2015, at 05:48, Ramesh Rocky <rmshkumar362@outlook.com> wrote:
> 
> Hi,
> 
> I am trying to write data into HDFS using Flume on a Windows machine. When I
> configure Flume and Hadoop on the same machine, writing data into HDFS works
> perfectly.
> 
> But when I configure Hadoop and Flume on different machines (both Windows
> machines) and try to write data into HDFS, it shows the following error.
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:35 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:36 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 28 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 46
> 15/03/27 09:46:37 WARN security.UserGroupInformation: No groups available for user SYSTEM
> 15/03/27 09:46:39 INFO hdfs.StateChange: BLOCK* allocateBlock: /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp. BP-412829692-192.168.56.1-1427371070417 blk_1073741836_1012{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-57962794-b57c-476e-a811-ebcf871f4f12:NORMAL:192.168.56.1:50010|RBW]]}
> 15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 15/03/27 09:46:42 WARN protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 15/03/27 09:46:42 WARN blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> 15/03/27 09:46:42 INFO ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.15.242:57416 Call#7 Retry#0
> java.io.IOException: File /testing/syncfusion/C#/JMS_message.1427429746967.log.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> 15/03/27 09:46:46 WARN namenode.FSNamesystem: trying to get DT with no secret manager running
> Does anybody know what causes this issue?
> 
> Thanks & Regards,
> Ramesh




