Hadoop namenode can't get out of safemode

Tags:

hadoop

Hi all,

I am using Hadoop 2.6.0.

When I force Hadoop to leave safe mode with 'hdfs dfsadmin -safemode leave', it reports "Safe mode is OFF", but I still cannot delete files in the directory. The attempt fails with:

    rm: Cannot delete /mei/app-20151013055617-0001-614d554c-cc04-4800-9be8-7d9b3fd3fcef. Name node is in safe mode.

I have tried the solutions listed on the Internet, but none of them work.

When I run 'hdfs dfsadmin -report', it shows:

    Safe mode is ON
    Configured Capacity: 52710469632 (49.09 GB)
    Present Capacity: 213811200 (203.91 MB)
    DFS Remaining: 0 (0 B)
    DFS Used: 213811200 (203.91 MB)
    DFS Used%: 100.00%
    Under replicated blocks: 39
    Blocks with corrupt replicas: 0
    Missing blocks: 0

    -------------------------------------------------
    Live datanodes (1):

    Name: 127.0.0.1:50010 (bdrhel6)
    Hostname: bdrhel6
    Decommission Status : Normal
    Configured Capacity: 52710469632 (49.09 GB)
    DFS Used: 213811200 (203.91 MB)
    Non DFS Used: 52496658432 (48.89 GB)
    DFS Remaining: 0 (0 B)
    DFS Used%: 0.41%
    DFS Remaining%: 0.00%
    Configured Cache Capacity: 0 (0 B)
    Cache Used: 0 (0 B)
    Cache Remaining: 0 (0 B)
    Cache Used%: 100.00%
    Cache Remaining%: 0.00%
    Xceivers: 1
    Last contact: Wed Oct 14 03:30:33 EDT 2015

Does anyone have the same problem?

Any help on this would be appreciated.

asked Oct 14 '15 by meizi zhang

People also ask

How do I get NameNode out of Safemode?

Run "hdfs dfsadmin -safemode leave". The error message itself gives the hint: "Name node is in safe mode. It was turned on manually. Use 'hdfs dfsadmin -safemode leave' to turn safe mode off."
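
For reference, the standard dfsadmin safe-mode subcommands in Hadoop 2.x (get, enter, leave, and wait are all part of the stock CLI):

    # Check the current safe mode state
    hdfs dfsadmin -safemode get

    # Ask the NameNode to leave safe mode immediately
    hdfs dfsadmin -safemode leave

    # Block until safe mode is exited; handy in scripts
    hdfs dfsadmin -safemode wait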

Why is NameNode in Safemode?

If one of the blocks in a file is replicated only once in the cluster, then the minimum replication factor for that file is not met, which also means the file is not in good health. The NameNode will stay in safe mode if the minimum replication factor is not met for too many blocks.
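
You can check block health yourself with the standard fsck tool. A minimal sketch (scanning the root path / here is only an example; point it at any directory):

    # Report overall HDFS health, including under-replicated and corrupt blocks
    hdfs fsck /

    # List only the files that have corrupt blocks
    hdfs fsck / -list-corruptfileblocks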


2 Answers

Safe mode is an HDFS state in which the file system is effectively read-only: no block replication is performed, and files cannot be created or deleted. The NameNode enters it automatically on startup, to give all DataNodes time to check in and report which blocks they hold before the NameNode determines which blocks are under-replicated, and so on. The NameNode waits until a specific percentage of blocks are present and accounted for; this threshold is controlled by the dfs.safemode.threshold.pct parameter (named dfs.namenode.safemode.threshold-pct in Hadoop 2.x). Once the threshold is met, safe mode is exited automatically and HDFS allows normal operations.
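
If you need to tune that threshold, a minimal hdfs-site.xml sketch (0.999 is the shipped default; the value here is only an example):

    <!-- Fraction of blocks that must report in before the
         NameNode automatically leaves safe mode -->
    <property>
      <name>dfs.namenode.safemode.threshold-pct</name>
      <value>0.999</value>
    </property>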

1. The command below forces the NameNode to exit safe mode:

   hdfs dfsadmin -safemode leave

2. Run hdfs fsck -move or hdfs fsck -delete to move corrupted files to /lost+found or to delete them; a combined sketch follows this list.
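
Putting the two steps together, a sketch of the usual cleanup sequence (inspect before you delete):

    # 1. See which files actually have corrupt blocks
    hdfs fsck / -list-corruptfileblocks

    # 2. Leave safe mode so the namespace becomes writable
    hdfs dfsadmin -safemode leave

    # 3. Either salvage readable blocks into /lost+found ...
    hdfs fsck / -move

    # ... or remove the corrupted files outright
    hdfs fsck / -delete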

Based on the report, it seems that resources are low on the NameNode. Add or free up more resources, then turn off safe mode manually. If you turn off safe mode before adding or freeing up resources, the NameNode will immediately return to safe mode.
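
Here the report shows "DFS Remaining: 0 (0 B)" with almost all capacity taken by non-DFS use, so the disk hosting the DataNode data directory is simply full. A rough sketch for finding and reclaiming local space (the paths below are assumptions, not from the original post):

    # See which local partitions are full on the DataNode host
    df -h

    # Find the biggest directories under a suspect mount point
    # (/var is only an example; adjust to your layout)
    du -sh /var/* | sort -h | tail

    # After freeing space, confirm the NameNode's state
    hdfs dfsadmin -safemode get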

Reference:

  1. Hadoop Tutorial - YDN

  2. fsck

answered Sep 21 '22 by Vinkal


I faced the same problem. It was occurring because there was no disk space left for Hadoop to run new commands that manipulate files. Since Hadoop was in safe mode, I could not even delete files inside HDFS. I am using the Cloudera distribution of Hadoop, so I first deleted a few files from the local Cloudera file system, which freed up some space. Then I executed the following commands:

    [cloudera@quickstart ~]$ hdfs dfsadmin -safemode leave && hadoop fs -rm -r <file on hdfs to be deleted>

This worked for me! HTH

answered Sep 25 '22 by code_explorer