 

How can I be sure that data is distributed evenly across the Hadoop nodes?

Tags:

hadoop

hdfs

If I copy data from the local system to HDFS, can I be sure that it is distributed evenly across the nodes?

P.S. HDFS guarantees that each block will be stored on 3 different nodes. But does this mean that all blocks of my file will be stored on the same 3 nodes? Or will HDFS select them at random for each new block?

asked Feb 21 '11 by yura


3 Answers

If your replication factor is set to 3, each block will be put on 3 separate nodes. The number of nodes a block is placed on is controlled by the replication factor. If you want greater distribution, you can increase it by editing $HADOOP_HOME/conf/hadoop-site.xml and changing the dfs.replication value.
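For reference, a minimal sketch of that setting in hadoop-site.xml (named hdfs-site.xml in later Hadoop versions) might look like this; 3 is the usual default:

```xml
<configuration>
  <!-- Number of nodes each HDFS block is replicated to -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

Note that dfs.replication only sets the default for newly written files; the replication of existing files can be changed with `hadoop fs -setrep`.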

I believe new blocks are placed almost randomly. There is some consideration for distribution across different racks (when Hadoop is made aware of racks). There is an example (can't find the link) that with replication at 3 and 2 racks, 2 replicas will be placed in one rack and the third in the other rack. I would guess that no preference is shown for which node within a rack gets a replica.

I haven't seen anything indicating or stating a preference to store blocks of the same file on the same nodes.

If you are looking for a way to force balanced data across nodes (with replication at whatever value), a simple option is $HADOOP_HOME/bin/start-balancer.sh, which runs a balancing process that moves blocks around the cluster automatically. This and a few other balancing options can be found in the Hadoop FAQ.
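As a sketch, running the balancer from the command line looks like this (the -threshold option, a percentage of disk-usage deviation, is optional; 10 is the default):

```shell
# Start the balancer; it moves blocks until no datanode's usage
# deviates from the cluster average by more than the threshold.
$HADOOP_HOME/bin/start-balancer.sh -threshold 5

# Stop a running balancer early if needed.
$HADOOP_HOME/bin/stop-balancer.sh
```

These commands require a running cluster, so they are shown here only as an illustration.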

Hope that helps.

answered Oct 13 '22 by QuinnG


You can open the HDFS Web UI on port 50070 of your NameNode. It shows information about the data nodes, including the used space per node.
If you do not have the UI, you can look at the space used in the HDFS directories of the data nodes.
If you have data skew, you can run the balancer, which will resolve it gradually.
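If the Web UI is unavailable, a command-line alternative is dfsadmin -report, which prints per-datanode capacity and usage (sketched here with the classic `hadoop` launcher; newer versions use `hdfs dfsadmin`):

```shell
# Print capacity, DFS-used, and remaining space for every datanode.
hadoop dfsadmin -report

# If the report reveals skew, kick off the balancer to even it out.
$HADOOP_HOME/bin/start-balancer.sh
```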

answered Oct 13 '22 by David Gruzman


Now, with the Hadoop-385 patch (a pluggable block placement policy), we can choose the block placement policy so as to place all blocks of a file on the same node (and likewise for its replicas). Read this blog about the topic; look at the comments section.
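For illustration, a custom placement policy is wired in through a NameNode configuration property. The property name and class below are assumptions that vary by Hadoop version, so treat this only as a sketch and check your version's documentation:

```xml
<!-- Illustrative only: the property name and the policy class
     differ between Hadoop versions. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicyDefault</value>
</property>
```

Replacing the default class with your own BlockPlacementPolicy subclass is what lets you co-locate all blocks of a file on one node.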

answered Oct 13 '22 by Mohamed