 

HDFS error: could only be replicated to 0 nodes, instead of 1

I've created an Ubuntu single-node Hadoop cluster on EC2.

Testing a simple file upload to HDFS works from the EC2 machine, but doesn't work from a machine outside of EC2.

I can browse the filesystem through the web interface from the remote machine, and it shows one datanode, which is reported as in service. I have opened all TCP ports from 0 to 60000(!) in the security group, so I don't think it's that.
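A quick way to double-check from the command line that the datanode really is registered and has free space (a sketch; this is the Hadoop 1.x admin command, newer versions use hdfs dfsadmin -report):

    # Ask the namenode for the cluster report: number of live datanodes,
    # their state, and configured/remaining capacity. A datanode that is
    # listed as in service but has 0 remaining space will still cause
    # this "replicated to 0 nodes" error on writes.
    hadoop dfsadmin -report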

I get the error:

    java.io.IOException: File /user/ubuntu/pies could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1448)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:690)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)

        at org.apache.hadoop.ipc.Client.call(Client.java:905)
        at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:928)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:811)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

The namenode log just shows the same error; the other logs don't seem to contain anything interesting.

Any ideas?

Cheers

asked Mar 14 '11 by Steve

2 Answers

WARNING: The following will destroy ALL data on HDFS. Do not execute the steps in this answer unless you are prepared to lose all existing data!

You should do this:

  1. stop all Hadoop services
  2. delete the dfs/name and dfs/data directories
  3. run hdfs namenode -format and answer the confirmation prompt with a capital Y
  4. start the Hadoop services

Also, check the disk space on your system and make sure the logs are not warning you about it. The steps above are sketched as shell commands below.
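A minimal sketch of these steps, assuming a default tarball install with the Hadoop bin directory on the PATH; the storage paths are assumptions, so substitute the dfs.name.dir and dfs.data.dir values from your own hdfs-site.xml:

    # 1. stop all Hadoop services
    #    (on newer versions: stop-dfs.sh and stop-yarn.sh)
    stop-all.sh

    # 2. delete the namenode and datanode storage directories
    #    (example paths -- use the values from your hdfs-site.xml)
    rm -rf /var/lib/hadoop/dfs/name /var/lib/hadoop/dfs/data

    # 3. reformat the namenode; answer the prompt with a capital Y
    #    (on Hadoop 1.x the equivalent is: hadoop namenode -format)
    hdfs namenode -format

    # 4. start the services again
    #    (on newer versions: start-dfs.sh and start-yarn.sh)
    start-all.sh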

answered by buzypi


This is your issue: the client can't communicate with the datanode, because the IP address the client received for the datanode is an internal IP and not the public IP. Take a look at this:

http://www.hadoopinrealworld.com/could-only-be-replicated-to-0-nodes/

Look at the source code of DFSClient$DFSOutputStream (Hadoop 1.2.1):

    //
    // Connect to first DataNode in the list.
    //
    success = createBlockOutputStream(nodes, clientName, false);

    if (!success) {
      LOG.info("Abandoning " + block);
      namenode.abandonBlock(block, src, clientName);

      if (errorIndex < nodes.length) {
        LOG.info("Excluding datanode " + nodes[errorIndex]);
        excludedNodes.add(nodes[errorIndex]);
      }

      // Connection failed. Let's wait a little bit and retry
      retry = true;
    }

The key thing to understand here is that the namenode only provides the list of datanodes on which to store the blocks. The namenode does not write the data to the datanodes; it is the client's job to write the data to the datanodes using the DFSOutputStream. Before any write can begin, the code above makes sure the client can communicate with the datanode(s), and if communication with a datanode fails, that datanode is added to excludedNodes.
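If this internal-vs-public IP mismatch is the cause, one commonly used workaround is to tell the client to connect to datanodes by hostname instead of by the IP the namenode returns. This is a sketch, assuming a Hadoop version that has the dfs.client.use.datanode.hostname property (Hadoop 1.1+/2.x) and that the datanode's hostname resolves to its public IP from the client (e.g. via an /etc/hosts entry):

    # Make the client dial the datanode by hostname for this command.
    # "pies" and the HDFS path are just the example file from the question.
    hadoop fs -D dfs.client.use.datanode.hostname=true -put pies /user/ubuntu/pies

The same property can be set to true in the client-side hdfs-site.xml to make it the default; the alternative is to run the client inside EC2 (or over a VPN) so the datanode's private IP is reachable.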

answered by Jerry Ragland