Name node is in safe mode. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
The single point of failure in Hadoop v1 is the NameNode. If the NameNode fails, the whole Hadoop cluster stops working. There is actually no data loss; only cluster operations shut down, because the NameNode is the single point of contact for all DataNodes, and when it fails all communication stops.
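As a quick sanity check when the cluster stops responding, you can verify whether the NameNode daemon is actually up on the master node with the JDK's jps tool (the PIDs shown here are illustrative):
$ jps
4785 NameNode
5021 SecondaryNameNode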
Copy the HDFS metadata from the active NameNode to the standby NameNode. Once you run this command, it reports from which node and location the metadata is being copied and whether the copy succeeded.
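If you are on an HA setup, the standard way to seed a standby NameNode with the active one's metadata is the bootstrap sub-command, run on the standby host (a sketch; your distribution may wrap this in a service script):
hdfs namenode -bootstrapStandby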
To forcefully make the NameNode leave safe mode, run the following command:
bin/hadoop dfsadmin -safemode leave
You are getting an Unknown command error because -safemode isn't a sub-command of hadoop fs, but of hadoop dfsadmin.
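In other words, the difference looks like this:
hadoop fs -safemode leave        # fails with "Unknown command"
hadoop dfsadmin -safemode leave  # -safemode is a dfsadmin sub-command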
Also, after the above command, I would suggest you run hadoop fsck so that any inconsistencies that crept into HDFS can be sorted out.
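A minimal sketch of such a check, assuming you run it as a user with read access to the whole namespace; -list-corruptfileblocks and -delete are optional extras for locating and removing unrecoverable files:
hdfs fsck /
hdfs fsck / -list-corruptfileblocks
hdfs fsck / -delete    # careful: permanently removes corrupt files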
Update:
Use the hdfs command instead of the hadoop command for newer distributions; the hadoop command is being deprecated:
hdfs dfsadmin -safemode leave
hadoop dfsadmin has been deprecated, and so has the hadoop fs command; all HDFS-related tasks are being moved to the separate hdfs command.
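The migration is mostly mechanical; for example:
hadoop fs -ls /           # old style
hdfs dfs -ls /            # new style
hadoop dfsadmin -report   # old style
hdfs dfsadmin -report     # new style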
Try this; it will work:
sudo -u hdfs hdfs dfsadmin -safemode leave
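The sudo -u hdfs prefix matters because safe mode can only be toggled by the HDFS superuser, i.e. the account the NameNode runs as (typically hdfs on packaged distributions). If you are unsure which account that is, check the NameNode process owner:
ps -ef | grep -i namenode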
That command did not work for me, but the following did:
hdfs dfsadmin -safemode leave
I used the hdfs command instead of the hadoop command.
Also check out http://ask.gopivotal.com/hc/en-us/articles/200933026-HDFS-goes-into-readonly-mode-and-errors-out-with-Name-node-is-in-safe-mode- for more detail.
Safe mode on means HDFS is in read-only mode.
Safe mode off means HDFS is in readable and writable mode.
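You can verify the read-only behaviour yourself: with safe mode on, reads still work but any write is rejected with the very error quoted at the top (the path here is illustrative):
hdfs dfsadmin -safemode enter
hdfs dfs -ls /              # works: reads are allowed
hdfs dfs -mkdir /tmp/demo   # fails: "Name node is in safe mode"
hdfs dfsadmin -safemode leave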
In Hadoop 2.6.0, we can check the status of the NameNode with the help of the below commands:
To check the NameNode status:
$ hdfs dfsadmin -safemode get
To enter safe mode:
$ hdfs dfsadmin -safemode enter
To leave safe mode:
$ hdfs dfsadmin -safemode leave
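There is also a fourth sub-command that is handy in startup scripts, since it simply blocks until the NameNode leaves safe mode on its own:
$ hdfs dfsadmin -safemode wait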
If you use Hadoop version 2.6.1 or above, the command works but complains that it is deprecated. I actually could not use hadoop dfsadmin -safemode leave because I was running Hadoop in a Docker container, and that command magically fails when run in the container. So what I did instead: I checked the documentation and found dfs.safemode.threshold.pct, which says:
Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to wait for any particular percentage of blocks before exiting safemode. Values greater than 1 will make safe mode permanent.
so I changed hdfs-site.xml to the following (in older Hadoop versions, apparently you need to do it in hdfs-default.xml):
<configuration>
  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0</value>
  </property>
</configuration>
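Note that in Hadoop 2.x this property was renamed to dfs.namenode.safemode.threshold-pct (the old key still works as a deprecated alias). You can confirm the value the NameNode actually sees with:
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct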
Try this:
sudo -u hdfs hdfs dfsadmin -safemode leave
Check the status of safe mode:
sudo -u hdfs hdfs dfsadmin -safemode get
If it is still in safe mode, one of the reasons could be that there is not enough space on your node; you can check your node's disk usage with:
df -h
If the root partition is full, delete files or add space to your root partition and retry the first step.
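Besides df -h on the local filesystem, it is worth checking what HDFS itself thinks is left; the report sub-command prints configured capacity, DFS used/remaining, and per-DataNode details:
sudo -u hdfs hdfs dfsadmin -report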