 

Can I change Spark's executor memory at runtime?

Is it possible to change the value of executor memory at runtime in Spark? The reason I want to do this is that for some map tasks I want the YARN scheduler to put each task on a separate node. By increasing the executor memory to near the total memory of a node, I ensure they are distributed one per node. Later on, I want to run several tasks per node, so I would lower the executor memory for them.

MetallicPriest asked Jul 12 '15

People also ask

How do I increase executor memory in Spark?

Use the --conf option to increase memory overhead when you run spark-submit. If increasing the memory overhead doesn't solve the problem, then reduce the number of executor cores.
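For example, a spark-submit call along these lines raises the per-executor overhead (the class name, jar, and memory sizes are placeholders, not values from the question; on Spark versions before 2.3 the property is spark.yarn.executor.memoryOverhead):

    spark-submit \
      --master yarn \
      --executor-memory 4g \
      --conf spark.executor.memoryOverhead=1g \
      --class com.example.MyApp \
      my-app.jar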

How do I change the memory on my Spark?

To enlarge the Spark shuffle service memory size, modify SPARK_DAEMON_MEMORY in $SPARK_HOME/conf/spark-env.sh (the default value is 2g), and then restart the shuffle service for the change to take effect.
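As a sketch, the relevant line in spark-env.sh would look something like this (4g is only an example value):

    # $SPARK_HOME/conf/spark-env.sh
    # Raise daemon memory (which covers the external shuffle service) above the default,
    # then restart the shuffle service so the new value is picked up.
    export SPARK_DAEMON_MEMORY=4g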

When should I increase Spark driver memory?

The --driver-memory flag controls the amount of memory allocated to the driver, which is 1 GB by default; it should be increased if you call a collect() or take(N) action on a large RDD inside your application.
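For instance, a submission like the following (class name, jar, and size are placeholders) gives the driver 4 GB instead of the default:

    # Sketch: allocate more heap to the driver, e.g. before collect()ing a large RDD.
    spark-submit \
      --driver-memory 4g \
      --class com.example.MyApp \
      my-app.jar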


1 Answer

No, you can't.

Each executor runs in its own JVM, and a JVM's heap size cannot be changed at runtime. See for reference: Setting JVM heap size at runtime
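In practice this means executor memory can only be chosen before the executor JVMs are launched, i.e. when the application is submitted. A possible workaround (not part of the original answer; the names and sizes below are placeholders) is to run the two phases as separate applications with different settings:

    # Phase 1: large executors so YARN places roughly one executor per node.
    spark-submit --master yarn --executor-memory 48g --class com.example.PhaseOne app.jar

    # Phase 2: smaller executors so several tasks can share a node.
    spark-submit --master yarn --executor-memory 8g --class com.example.PhaseTwo app.jar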

red1ynx answered Sep 19 '22