
Hadoop: Number of mappers and reducers

I ran a Hadoop MapReduce job on a 1.1 GB file multiple times with different numbers of mappers and reducers (e.g. 1 mapper and 1 reducer, 1 mapper and 2 reducers, 1 mapper and 4 reducers, ...).

Hadoop is installed on a quad-core machine with hyper-threading.

The following are the top 5 results, sorted by shortest execution time:

+----------+-----------+------------+
|   time   | # mappers | # reducers |
+----------+-----------+------------+
| 7m 50s   |     8     |     2      |
| 8m 13s   |     8     |     4      |
| 8m 16s   |     8     |     8      |
| 8m 28s   |     4     |     8      |
| 8m 37s   |     4     |     4      |
+----------+-----------+------------+

Edit

The results for 1 - 8 mappers and 1 - 8 reducers (columns = # of mappers, rows = # of reducers):

+---------+---------+---------+---------+---------+
|         |    1    |    2    |    4    |    8    |
+---------+---------+---------+---------+---------+
|    1    |  16:23  |  13:17  |  11:27  |  10:19  |
+---------+---------+---------+---------+---------+
|    2    |  13:56  |  10:24  |  08:41  |  07:52  |
+---------+---------+---------+---------+---------+
|    4    |  14:12  |  10:21  |  08:37  |  08:13  |  
+---------+---------+---------+---------+---------+
|    8    |  14:09  |  09:46  |  08:28  |  08:16  |
+---------+---------+---------+---------+---------+

(1) It looks like the program runs slightly faster when I have 8 mappers, but why does it slow down as I increase the number of reducers? (e.g. 8 mappers/2 reducers is faster than 8 mappers/8 reducers)

(2) When I use only 4 mappers, it's a bit slower simply because I'm not utilizing the other 4 cores, right?

asked Dec 01 '13 by kabichan


People also ask

How do you determine the number of mappers and reducers?

The number of mappers depends on the total size of the input, i.e. the total number of blocks of the input files.
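
As a rough worked example (the 128 MB block size and the exact byte count are assumptions; older Hadoop 1.x installs default to 64 MB blocks), the block count gives an estimate of the map task count for a file like the 1.1 GB input above:

    public class MapTaskEstimate {
        public static void main(String[] args) {
            // Assumed values: a ~1.1 GB input file and a 128 MB HDFS block size.
            long inputBytes = 1_100_000_000L;
            long blockBytes = 128L * 1024 * 1024;

            // One map task per input split; by default there is one split per HDFS block.
            long mapTasks = (inputBytes + blockBytes - 1) / blockBytes; // ceiling division
            System.out.println("Estimated map tasks: " + mapTasks);     // prints 9 for these numbers
        }
    }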

How many mappers are there in Hadoop?

Hadoop runs 2 mappers and 2 reducers (by default) on a data node; the number of mappers can be changed in the MapReduce configuration.
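
A minimal sketch of how these counts are typically controlled in a job driver, assuming the Hadoop 2.x mapreduce API (the driver class name and paths are placeholders): the reducer count is set directly on the job, while the mapper count is only influenced indirectly through the input split size.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ExampleDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "example");
            job.setJarByClass(ExampleDriver.class);

            // The number of reducers is set explicitly.
            job.setNumReduceTasks(4);

            // The number of mappers is derived from the input splits; capping the
            // split size at 64 MB produces more (smaller) splits, hence more mappers.
            FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // Mapper/Reducer classes and key/value types omitted for brevity.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }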

Do we need more reducers than mappers?

If your data size is small, you don't need many mappers running to process the input files in parallel. However, if the <key,value> pairs generated by the mappers are large and diverse, then it makes sense to have more reducers because you can process more <key,value> pairs in parallel.

What is the right number of reducers in Hadoop?

The right number of reducers seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * <no. of maximum containers per node>). With 0.95, all of the reducers can launch immediately and start transferring map outputs as the maps finish.
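
As a worked example of that heuristic (the node and container counts below are made-up values, purely to show the arithmetic):

    public class ReducerHeuristic {
        public static void main(String[] args) {
            int nodes = 4;               // assumed cluster size
            int containersPerNode = 8;   // assumed maximum containers per node

            // 0.95: all reducers can launch immediately as the maps finish.
            int conservative = (int) (0.95 * nodes * containersPerNode); // 30
            // 1.75: faster nodes run a second wave of reducers, improving load balancing.
            int aggressive = (int) (1.75 * nodes * containersPerNode);   // 56

            System.out.println("0.95 * slots = " + conservative);
            System.out.println("1.75 * slots = " + aggressive);
        }
    }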


1 Answer

The optimal number of mappers and reducers depends on a lot of things.

The main thing to aim for is the balance between the CPU power used, the amount of data that is transported (into the mappers, between the mappers and reducers, and out of the reducers), and the disk 'head movements'.

Each task in a MapReduce job works best if it can read/write the data with minimal disk head movements, usually described as "sequential reads/writes". But if the task is CPU bound, the extra disk head movements do not impact the job.

It seems to me that in this specific case you have

  • a mapper that burns quite a few CPU cycles (i.e. more mappers make it go faster because the CPU is the bottleneck and the disks can keep up in providing the input data).
  • a reducer that uses almost no CPU cycles and is mostly IO bound. This means that with a single reducer you are still CPU bound, yet with 4 or more reducers you seem to be IO bound. So 4 reducers make the disk head move 'too much'.

Possible ways to handle this kind of situation:

First do exactly what you did: Do some test runs and see which setting performs best given this specific job and your specific cluster.

Then you have three options:

  • Accept the situation you have
  • Shift load from CPU to disk or the other way around.
  • Get a bigger cluster: More CPUs and/or more disks.

Suggestions for shifting the load:

  • If CPU bound and all CPUs are fully loaded then reduce the CPU load:

    • Check for needless CPU cycles in your code.
    • Switch to a 'lower CPU impact' compression codec, e.g. go from GZip to Snappy or to 'no compression'.
    • Tune the number of mappers/reducers in your job.
  • If IO bound and you have some CPU capacity left:

    • Enable compression: This makes the CPUs work a bit harder and reduces the work the disks have to do (a configuration sketch follows these suggestions).
    • Experiment with various compression codecs (I recommend sticking with either Snappy or Gzip ... I very often go with Gzip).
    • Tune the number of mappers/reducers in your job.
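
As a sketch of the compression knobs mentioned above, assuming the Hadoop 2.x property names (older releases use the corresponding mapred.* names): intermediate map output and final job output can be compressed independently.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressionSetup {
        public static Job configure(Configuration conf) throws Exception {
            // Compress the intermediate map output: Snappy is cheap on CPU and
            // shrinks the data shuffled from the mappers to the reducers.
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                    SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "compressed-job");

            // Compress the final output: Gzip spends more CPU to write fewer bytes to disk.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
            return job;
        }
    }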
answered Oct 04 '22 by Niels Basjes