Hi all,
I am using Hadoop 2.6.0.
When I force Hadoop to leave safe mode with hdfs dfsadmin -safemode leave, it reports Safe mode is OFF, but I still cannot delete files in the directory. The rm fails with:
rm: Cannot delete /mei/app-20151013055617-0001-614d554c-cc04-4800-9be8-7d9b3fd3fcef. Name node is in safe mode.
I have tried the solutions suggested on the Internet, but none of them work.
When I run hdfs dfsadmin -report, it shows:
Safe mode is ON
Configured Capacity: 52710469632 (49.09 GB)
Present Capacity: 213811200 (203.91 MB)
DFS Remaining: 0 (0 B)
DFS Used: 213811200 (203.91 MB)
DFS Used%: 100.00%
Under replicated blocks: 39
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 127.0.0.1:50010 (bdrhel6)
Hostname: bdrhel6
Decommission Status : Normal
Configured Capacity: 52710469632 (49.09 GB)
DFS Used: 213811200 (203.91 MB)
Non DFS Used: 52496658432 (48.89 GB)
DFS Remaining: 0 (0 B)
DFS Used%: 0.41%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Oct 14 03:30:33 EDT 2015
Has anyone run into the same problem?
Any help would be appreciated.
Name node is in safe mode. It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
If a block in a file has only one replica in the cluster, the minimum replication factor for that file is not met, which also means the file is not healthy. The NameNode stays in safe mode as long as too many blocks fail to meet the minimum replication factor. You can check which blocks are affected with fsck, as sketched below.
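As a rough sketch of that check (assuming the hdfs client is on your PATH; /mei is simply the directory from the question):

# Summarise namespace health; the summary lists under-replicated,
# corrupt and missing blocks.
hdfs fsck /

# Show per-file block and replica details for the suspect path.
hdfs fsck /mei -files -blocks -locations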
Safemode is an HDFS state in which the file system is mounted read-only; no replication is performed, nor can files be created or deleted. Safemode is entered automatically when the NameNode starts, to give all DataNodes time to check in with the NameNode and announce which blocks they hold before the NameNode determines which blocks are under-replicated, etc. The NameNode waits until a specific percentage of the blocks are present and accounted for; this is controlled by the dfs.safemode.threshold.pct parameter in the configuration. After this threshold is met, safemode is exited automatically and HDFS allows normal operations.
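As an illustrative check (assuming a Hadoop 2.x client, where this setting is also exposed under the newer key dfs.namenode.safemode.threshold-pct):

# Print the configured block-report threshold (commonly 0.999 by default).
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct

# Ask the NameNode whether it is currently in safe mode.
hdfs dfsadmin -safemode get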
1. The command below forces the NameNode to exit safemode:
hdfs dfsadmin -safemode leave
2. Run hdfs fsck with -move or -delete to move aside or delete corrupted files; a short sketch of both steps follows this list.
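A minimal sketch of those two steps, assuming you run them as the HDFS superuser and want the whole namespace (/) checked:

# Step 1: force the NameNode out of safe mode.
hdfs dfsadmin -safemode leave

# Step 2a: move files with corrupt blocks into /lost+found ...
hdfs fsck / -move

# Step 2b: ... or delete them outright once you are sure they are not needed.
hdfs fsck / -delete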
Based on the report, it seems that resources are low on the NameNode (DFS Remaining is 0). Add or free up disk space first, then turn off safe mode manually. If you turn off safe mode before adding or freeing up resources, the NameNode will immediately return to safe mode.
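For illustration (the log directory below is only a guess at a likely space hog; adjust paths for your install):

# Check how full the local filesystem backing HDFS is
# ("Non DFS Used" in the report is local data outside HDFS).
df -h

# Find large consumers under a likely culprit such as the Hadoop log directory.
du -sh /var/log/hadoop/* 2>/dev/null | sort -h | tail

# After freeing space, confirm DFS Remaining is non-zero, then leave safe mode.
hdfs dfsadmin -report | grep "DFS Remaining"
hdfs dfsadmin -safemode leave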
References:
Hadoop Tutorial - YDN
hdfs fsck
I faced the same problem. It occurred because there was no disk space left for Hadoop to run new commands that manipulate files. Since HDFS was in safe mode, I could not even delete files inside HDFS. I am using the Cloudera distribution of Hadoop, so I first deleted a few files on the local (Cloudera) file system, which freed up some space. Then I executed the following command:
[cloudera@quickstart ~]$ hdfs dfsadmin -safemode leave && hadoop fs -rm -r <file on hdfs to be deleted>
This worked for me! HTH
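If the delete still races the mode change, a hedged variant is to run the steps one at a time and block until safe mode is really off (the -safemode wait subcommand is standard dfsadmin; the path placeholder is kept from the command above):

# Ask the NameNode to leave safe mode, then block until it really has.
hdfs dfsadmin -safemode leave
hdfs dfsadmin -safemode wait

# Only then remove the path that was failing before.
hadoop fs -rm -r <file on hdfs to be deleted>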