 

Hadoop NameNode: single point of failure


The NameNode in the Hadoop architecture is a single point of failure.

How do people who run large Hadoop clusters cope with this problem?

Is there an industry-accepted solution that has worked well, wherein a secondary NameNode takes over if the primary one fails?

rakeshr asked Dec 21 '10



2 Answers

Yahoo has certain recommendations for configuration settings at different cluster sizes to take NameNode failure into account. For example:

The single point of failure in a Hadoop cluster is the NameNode. While the loss of any other machine (intermittently or permanently) does not result in data loss, NameNode loss results in cluster unavailability. The permanent loss of NameNode data would render the cluster's HDFS inoperable.

Therefore, another step should be taken in this configuration to back up the NameNode metadata.
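One common way to take that backup step (a sketch; the directory paths are placeholders, and the property name is the standard one from Hadoop 2.x, where 1.x used `dfs.name.dir` instead) is to point the NameNode at several metadata directories, including one on remote storage such as an NFS mount, so a full copy of the fsimage and edit log is written to each:

```xml
<!-- hdfs-site.xml: write NameNode metadata to a local disk AND an NFS mount -->
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- comma-separated list; each directory receives a complete copy
       of the fsimage and edits, so losing the local disk is not fatal -->
  <value>file:///data/1/dfs/nn,file:///mnt/nfs/dfs/nn</value>
</property>
```

If the machine itself dies, the copy on the NFS mount can be used to bring up a replacement NameNode.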

Facebook uses a tweaked version of Hadoop for its data warehouses; it has some optimizations that focus on NameNode reliability. In addition to the patches available on GitHub, Facebook appears to use AvatarNode specifically for quickly switching between primary and secondary NameNodes. Dhruba Borthakur's blog contains several other entries offering further insights into the NameNode as a single point of failure.

Edit: Further info about Facebook's improvements to the NameNode.

Bkkbrad answered Sep 21 '22


NameNode High Availability was introduced with the Hadoop 2.x release.

It can be achieved in two modes: with shared NFS storage or with the Quorum Journal Manager (QJM).

High availability with QJM is the preferred option.

In a typical HA cluster, two separate machines are configured as NameNodes. At any point in time, exactly one of the NameNodes is in an Active state, and the other is in a Standby state. The Active NameNode is responsible for all client operations in the cluster, while the Standby is simply acting as a slave, maintaining enough state to provide a fast failover if necessary.
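A minimal `hdfs-site.xml` sketch of such a QJM-based HA pair (the nameservice ID `mycluster`, the NameNode IDs `nn1`/`nn2`, and the host names are placeholders; the property names come from the Apache HDFS HA documentation):

```xml
<!-- Logical nameservice that clients address instead of a single host -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- The two NameNodes participating in HA -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<!-- Quorum of JournalNodes through which the Active NameNode publishes
     edits and from which the Standby replays them -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<!-- Lets clients discover which NameNode is currently Active -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, `hdfs haadmin -getServiceState nn1` reports whether a NameNode is active or standby, and `hdfs haadmin -failover nn1 nn2` performs a manual failover; automatic failover additionally requires a ZooKeeper quorum and the ZKFC daemon.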

Have a look at the SE questions below, which explain the complete failover process.

Secondary NameNode usage and High availability in Hadoop 2.x

How does Hadoop Namenode failover process works?

Ravindra babu answered Sep 19 '22