
The memory consumption of Hadoop's NameNode?

Can anyone give a detailed analysis of the memory consumption of the NameNode, or point me to some reference material? I cannot find anything on the web. Thank you!

asked Nov 09 '12 by jun zhou

People also ask

How much memory does a NameNode need?

Here is a rule of thumb: allocate 1,000 MB to the NameNode per million blocks stored in HDFS. Assuming a 128 MB block size, one million blocks corresponds to roughly 128 TB of raw disk space, so 1,000 MB allocated to the NameNode is what is required to manage a cluster of that size. Note that the 1,000 MB is used just by the NameNode process for holding the block metadata in memory.
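
As a quick illustration, here is a minimal Python sketch of that rule of thumb (the function name and the binary TB-to-MB conversion are my own; the 1,000 MB per million blocks figure comes from the quote above):

    # Back-of-the-envelope translation of the quoted rule of thumb:
    # ~1,000 MB of NameNode heap per million HDFS blocks.
    def namenode_heap_mb(raw_disk_tb: float, block_size_mb: int = 128) -> float:
        raw_disk_mb = raw_disk_tb * 1024 * 1024       # TB -> MB (binary units)
        blocks = raw_disk_mb / block_size_mb          # assume full-sized blocks
        return blocks / 1_000_000 * 1_000             # 1,000 MB per million blocks

    print(namenode_heap_mb(128))   # 128 TB of raw disk -> ~1,049 MB of heap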

What information does the NameNode keep in memory?

The HDFS NameNode stores metadata: the number of data blocks, file names, paths, block IDs, block locations, the number of replicas, and slave-related configuration. This metadata is held in memory on the master for faster retrieval.
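
As a toy illustration only (these are not Hadoop's actual classes), the per-file metadata the NameNode keeps in memory can be pictured like this:

    # Toy sketch of the kind of metadata held per file: path, replication,
    # block IDs, and the datanodes holding each replica.
    from dataclasses import dataclass

    @dataclass
    class BlockInfo:
        block_id: int
        locations: list[str]      # datanode hostnames holding a replica

    @dataclass
    class FileMeta:
        path: str
        replication: int
        blocks: list[BlockInfo]   # ordered list of the file's blocks

    f = FileMeta(
        path="/data/logs/2012-11-09.log",
        replication=3,
        blocks=[BlockInfo(block_id=1073741825, locations=["dn1", "dn2", "dn3"])],
    )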

Why should the NameNode have a good amount of RAM?

The NameNode is critical to HDFS: when it is down, the HDFS/Hadoop cluster is inaccessible and considered down. The NameNode is a single point of failure in a Hadoop cluster, and it is usually configured with a lot of RAM because the block locations are held in main memory.


3 Answers

I suppose the memory consumption depends on your HDFS setup: on the overall size of the namespace and on the block size. From the Hadoop NameNode wiki:

Use a good server with lots of RAM. The more RAM you have, the bigger the file system, or the smaller the block size.

From https://twiki.opensciencegrid.org/bin/view/Documentation/HadoopUnderstanding:

Namenode: The core metadata server of Hadoop. This is the most critical piece of the system, and there can only be one of these. This stores both the file system image and the file system journal. The namenode keeps all of the filesystem layout information (files, blocks, directories, permissions, etc) and the block locations. The filesystem layout is persisted on disk and the block locations are kept solely in memory. When a client opens a file, the namenode tells the client the locations of all the blocks in the file; the client then no longer needs to communicate with the namenode for data transfer.

The same site recommends the following:

Namenode: We recommend at least 8GB of RAM (minimum is 2GB RAM), preferably 16GB or more. A rough rule of thumb is 1GB per 100TB of raw disk space; the actual requirement is around 1GB per million objects (files, directories, and blocks). The CPU requirements are any modern multi-core server CPU. Typically, the namenode will only use 2-5% of your CPU. As this is a single point of failure, the most important requirement is reliable hardware rather than high performance hardware. We suggest a node with redundant power supplies and at least 2 hard drives.
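
A minimal sketch of that sizing advice, assuming the 1 GB per million objects rule and the 8 GB floor from the quote (the function name is made up for illustration):

    # ~1 GB of RAM per million namespace objects, with the quoted 8 GB floor.
    def recommended_namenode_ram_gb(files: int, directories: int, blocks: int) -> int:
        objects = files + directories + blocks
        return max(8, round(objects / 1_000_000))

    # e.g. 2M files, 0.3M directories, 5M blocks -> 8 GB (the floor still applies)
    print(recommended_namenode_ram_gb(2_000_000, 300_000, 5_000_000))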

For a more detailed analysis of memory usage, check this link out: https://issues.apache.org/jira/browse/HADOOP-1687

You might also find this question interesting: Hadoop namenode memory usage

answered by Pitt


There are several technical limits to the NameNode (NN), and hitting any of them will limit your scalability.

  1. Memory. The NN consumes about 150 bytes per block. From this you can calculate how much RAM you need for your data; there is a good discussion in: Namenode file quantity limit. A rough sketch follows after this list.
  2. IO. The NN does one IO for each change to the filesystem (create, delete a block, etc.), so your local IO should be able to keep up. This is harder to estimate, but since the number of blocks is already bounded by memory, you will not hit this limit unless your cluster is very big. If it is, consider an SSD.
  3. CPU. The NN carries a considerable load keeping track of the health of all blocks on all datanodes: each datanode periodically reports the state of all of its blocks. Again, unless the cluster is very big, this should not be a problem.
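
Here is the rough sketch promised in point 1, assuming the ~150 bytes per block figure quoted above (the helper name is made up; real NameNode usage also depends on files, directories, and replica lists):

    # Heap needed if the NameNode spends ~150 bytes per block.
    def heap_for_blocks_gb(num_blocks: int, bytes_per_block: int = 150) -> float:
        return num_blocks * bytes_per_block / 1024 ** 3   # bytes -> GiB

    print(f"{heap_for_blocks_gb(12_000_000):.2f} GiB")    # ~1.68 GiB for 12M blocks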
answered by David Gruzman


Example calculation

200-node cluster
24 TB/node
128 MB block size
Replication factor = 3

How much NameNode memory is required?

# blocks = 200 * 24 * 2^20 / (128 * 3)
≈ 13 million blocks
≈ 13,000 MB of NameNode memory (at roughly 1,000 MB per million blocks).
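
The same back-of-the-envelope calculation spelled out in Python, assuming a binary TB-to-MB conversion and ~1,000 MB of heap per million blocks:

    nodes = 200
    tb_per_node = 24
    block_size_mb = 128
    replication = 3

    raw_mb = nodes * tb_per_node * 2 ** 20            # total raw disk in MB
    blocks = raw_mb / (block_size_mb * replication)   # unique blocks stored
    heap_mb = blocks / 1_000_000 * 1_000              # heap estimate

    print(f"{blocks / 1e6:.1f} million blocks, ~{heap_mb:,.0f} MB of NameNode heap")
    # -> 13.1 million blocks, ~13,107 MB of NameNode heap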

answered by user166555