 

Does reducing the number of executor-cores consume less executor-memory?

My Spark job failed with the YARN error: Container killed by YARN for exceeding memory limits. 10.0 GB of 10 GB physical memory used.

Acting on intuition, I decreased the number of cores from 5 to 1, and the job ran successfully.

I did not increase the executor-memory because 10g was the max for my YARN cluster.

I just want to confirm my intuition. Does reducing the number of executor-cores consume less executor-memory? If so, why?

asked Apr 29 '19 by Glide


1 Answer

spark.executor.cores=5, spark.executor.memory=10G

This means an executor can run 5 tasks in parallel, and all 5 tasks share the executor's single 10 GB heap, because spark.executor.memory is allocated per executor, not per core. On average, each task has about 2 GB available. If each task consumes more than 2 GB, the JVM as a whole will exceed 10 GB, and YARN will kill the container.

spark.executor.cores=1, spark.executor.memory=10G

This means an executor can run only 1 task at a time, so the entire 10 GB is available to that single task. A task that uses more than 2 GB but less than 10 GB therefore still fits within the container limit. That was the case in your job, which is why it succeeded.
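For reference, here is a sketch of how the two configurations could be passed on the command line. The --executor-cores and --executor-memory flags are the standard spark-submit equivalents of the properties above; the class name and JAR file are placeholders for your application.

    # Original configuration: 5 concurrent tasks per executor share one 10 GB heap
    spark-submit --executor-cores 5 --executor-memory 10G \
        --class com.example.MyJob my-job.jar

    # Reduced-core configuration: each task gets the whole 10 GB heap to itself
    spark-submit --executor-cores 1 --executor-memory 10G \
        --class com.example.MyJob my-job.jar

Note that the total memory per executor is unchanged in both cases; only the number of tasks competing for that memory differs.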

answered Oct 27 '22 by moriarty007