I have 3 DataNodes running, and while running a job I am getting the following error:
java.io.IOException: File /user/ashsshar/olhcache/loaderMap9b663bd9 could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation. at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1325)
This error mainly occurs when the DataNode instances have run out of space or when the DataNodes are not running. I tried restarting the DataNodes but am still getting the same error.
hdfs dfsadmin -report on my cluster nodes clearly shows that a lot of space is available.
I am not sure why this is happening.
When a file is written to HDFS, it is replicated to multiple DataNodes. When you see this error, it means that the NameNode daemon does not have any available DataNode instances to write the data to. In other words, block replication is not taking place.
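To confirm whether the NameNode actually sees any live DataNodes, and whether disk space or logs point to the cause, you can run a few checks. This is a minimal sketch; the log path and data directory are assumptions that depend on your distribution:

```shell
# Ask the NameNode how many DataNodes are live and how much
# capacity each one reports.
hdfs dfsadmin -report

# Check free space on the volume backing the DataNode data directory
# (path assumes a CDH-style layout; adjust for your install).
df -h /var/lib/hadoop-hdfs

# Look at the NameNode log for the reason nodes were excluded
# (log location varies by distribution).
tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log
```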
The DataNode daemon can be started manually using the $HADOOP_HOME/bin/hadoop-daemon.sh script. Once started, it contacts the master (NameNode) automatically and joins the cluster. The new node should also be added to the conf/slaves file on the master server so that script-based commands recognize it.
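The manual start described above looks like the following on the affected node, assuming $HADOOP_HOME points at your Hadoop installation:

```shell
# Start the DataNode daemon manually on this node.
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode

# Verify it came up: a "DataNode" process should appear in jps output.
jps
```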
1. Stop all Hadoop daemons:
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
2. Remove all files from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name, e.g.:
sudo rm -r /var/lib/hadoop-hdfs/cache/
3. Format the NameNode (warning: this erases all data currently stored in HDFS):
sudo -u hdfs hdfs namenode -format
4. Start all Hadoop daemons:
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
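After the daemons are back up, you can check that replication actually works again with a small smoke test. This is a sketch assuming a standard CDH-style install where HDFS services run as the hdfs user:

```shell
# Confirm the NameNode now reports live DataNodes.
sudo -u hdfs hdfs dfsadmin -report

# Attempt a small test write; if block replication works,
# the put succeeds and cat echoes the file back.
echo "replication test" > /tmp/hdfs-test.txt
sudo -u hdfs hdfs dfs -put /tmp/hdfs-test.txt /tmp/
sudo -u hdfs hdfs dfs -cat /tmp/hdfs-test.txt
```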