
Network bandwidth bottleneck for sorting of mapreduce intermediate keys?

I have been learning about the MapReduce algorithm and how it can potentially scale to millions of machines, but I don't understand how the sorting of the intermediate keys after the map phase can scale. There could be

1,000,000 x 1,000,000

potential pairs of machines exchanging small key/value pairs of the intermediate results with each other. Isn't this a bottleneck?

asked Mar 11 '10 by yazz.com
1 Answer

It's true that one of the bottlenecks in Hadoop MapReduce is network bandwidth between machines on the cluster. However, the outputs from each map phase are not sent to every machine in the cluster.

The number of map and reduce functions is defined by the job you are running. Each map processes its input data, sorts it to group the keys, and writes it to disk. The job defines how many reduce functions you wish to apply to the output from the maps.
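To make the "defined by the job" part concrete, here is a minimal driver sketch using Hadoop's Java API (org.apache.hadoop.mapreduce). TokenizerMapper and SumReducer are placeholder class names for a word-count-style job, and the reduce count of 4 is arbitrary:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        // Placeholder classes: each map sorts/groups its own output locally,
        // each reduce sees every value for the keys it is responsible for.
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);

        // The job, not the cluster size, decides how many reduces run.
        job.setNumReduceTasks(4);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```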

Each reduce needs to see all the data for a given key, so if you had a single reduce running for the job, all the outputs from each map would need to be sent to the node in the cluster that is running that reduce. Before the reduce runs, the data from each map is merged to group all the keys.

If multiple reducers are used, each map partitions its output, creating one partition per reduce. Each partition is then sent to the correct reduce. This ensures that all the data for a given key is processed by a single reduce.
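This routing is done by a partitioner. The sketch below mirrors the behaviour of Hadoop's default HashPartitioner: hash the key and take it modulo the number of reduce tasks, so the same key always lands on the same reduce. You would only register a custom class like this (with job.setPartitionerClass) if you needed different routing; the class name here is hypothetical:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes each (key, value) pair to one of the reduce tasks. Because the
// partition depends only on the key, every value for that key ends up
// at the same reduce.
public class KeyHashPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Mask the sign bit so the result is non-negative, then take the
        // hash modulo the number of reduce tasks.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```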

To reduce the amount of data that needs to be sent over the network, you can apply a combine function to the output of each map. This has the effect of running a reduce on the map's local output. This minimizes the amount of data that needs to be transferred to the reducers and speeds up the overall execution time of the job.
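A combiner is registered on the job with job.setCombinerClass. When the reduce operation is associative and commutative, such as summing counts, the reducer class itself can usually double as the combiner. Below is a sketch of the placeholder SumReducer from the driver example, used in both roles:

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the counts for one key. Registered on the job as both combiner and
// reducer, e.g.:
//   job.setCombinerClass(SumReducer.class);
//   job.setReducerClass(SumReducer.class);
// As a combiner it pre-aggregates each map's local output, so far fewer
// records cross the network during the shuffle.
public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```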

answered by Binary Nerd