Spark performance tuning - number of executors vs number of cores

I have two questions around performance tuning in Spark:

  1. I understand that one of the key levers for controlling parallelism in a Spark job is the number of partitions in the RDD being processed, together with the number of executors and cores processing those partitions. Can I assume this to be true:

    • # of executors * # of executor cores should be <= # of partitions, i.e. one partition is always processed by one core of one executor, so there is no point in having more executors * cores than the number of partitions.
  2. I understand that having a high number of cores per executor can have a negative impact on things like HDFS writes, but here's my second question: purely from a data-processing point of view, what is the difference between the two? E.g., if I have a 10-node cluster, what would be the difference between these two jobs (assuming there's ample memory per node to process everything):

    1. 5 executors * 2 executor cores

    2. 2 executors * 5 executor cores

    Assuming there's infinite memory and CPU, from a performance point of view should we expect the above two to perform the same?
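The arithmetic behind question 1 can be sketched as a small helper (plain Python; the function names `task_slots` and `waves` are illustrative, not part of any Spark API):

```python
import math

def task_slots(num_executors: int, cores_per_executor: int) -> int:
    """Total number of tasks the cluster can run concurrently."""
    return num_executors * cores_per_executor

def waves(num_partitions: int, slots: int) -> int:
    """How many 'waves' of tasks are needed to process all partitions."""
    return math.ceil(num_partitions / slots)

# Both layouts from the question expose 10 concurrent task slots,
# so a 20-partition RDD is processed in 2 waves either way.
print(task_slots(5, 2), task_slots(2, 5))   # 10 10
print(waves(20, 10))                        # 2
# With fewer partitions than slots, the extra slots simply sit idle:
print(waves(8, 10))                         # 1
```

This illustrates the assumption in question 1: once executors * cores exceeds the partition count, the surplus slots contribute nothing to that stage.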

Shay asked Aug 17 '16

1 Answer

Most of the time, using larger executors (more memory, more cores) is better. First: a larger executor with more memory can easily support broadcast joins and do away with shuffles. Second: since tasks are not created equal, statistically larger executors have a better chance of surviving OOM issues. The only problem with large executors is GC pauses. G1GC helps.
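As a sketch, this advice might translate into submit-time settings like the following (the flags and config keys are standard spark-submit options, but the specific sizes and the job name `my_job.py` are illustrative assumptions):

```shell
# Fewer, larger executors; G1GC to keep pause times in check.
spark-submit \
  --num-executors 2 \
  --executor-cores 5 \
  --executor-memory 20g \
  --conf spark.executor.extraJavaOptions=-XX:+UseG1GC \
  my_job.py
```

The larger per-executor heap is what makes broadcast joins viable: the broadcast copy of the small table has to fit in each executor's memory.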

Rohit Karlupia answered Sep 30 '22