Is it possible to change the value of executor memory at runtime in Spark? The reason I want to do this is that for some map tasks I want the YARN scheduler to put each task on a separate node. By increasing the executor memory to near the total memory of a node, I ensure they are distributed one per node. Later on, I want to run several tasks per node, so I would lower the executor memory for them.
Use the --conf option to increase the executor memory overhead (spark.executor.memoryOverhead, or spark.yarn.executor.memoryOverhead on older versions) when you run spark-submit. If increasing the memory overhead doesn't solve the problem, reduce the number of executor cores.
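A minimal sketch of supplying the same settings programmatically, assuming Spark 2.3+ (where the overhead key is spark.executor.memoryOverhead); the app name and all sizes are purely illustrative:

    import org.apache.spark.sql.SparkSession

    // Roughly equivalent to:
    //   spark-submit --conf spark.executor.memoryOverhead=2g --executor-cores 2 ...
    val spark = SparkSession.builder()
      .appName("overhead-example")                    // hypothetical app name
      .config("spark.executor.memory", "8g")          // heap per executor
      .config("spark.executor.memoryOverhead", "2g")  // off-heap overhead (Spark 2.3+)
      .config("spark.executor.cores", "2")            // fewer cores per executor eases memory pressure
      .getOrCreate()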
To enlarge the Spark shuffle service memory, modify SPARK_DAEMON_MEMORY in $SPARK_HOME/conf/spark-env.sh (the default value is 2g), then restart the shuffle service for the change to take effect.
The --driver-memory flag controls the amount of memory allocated to the driver (1g by default). Increase it if you call a collect() or take(N) action on a large RDD inside your application, since the results are brought back to the driver.
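A hedged sketch of the programmatic equivalent; note that in client mode spark.driver.memory must be set before the driver JVM starts (via --driver-memory or spark-defaults.conf), so the builder call below only takes effect when a separate driver JVM is launched, as in cluster mode. The app name and size are illustrative:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("driver-memory-example")      // hypothetical app name
      .config("spark.driver.memory", "4g")   // ignored in client mode once the driver JVM is already running
      .getOrCreate()

    // Size the driver for what collect()/take(N) pulls back to it:
    // val rows = spark.sparkContext.parallelize(1 to 1000000).collect()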
No, you can't.
Each executor runs in its own JVM, and you can't change a JVM's heap size at runtime. Please see for reference: Setting JVM heap size at runtime
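A common workaround, sketched below under the assumption that your job can be split into two phases and tolerate rebuilding the application context in between: executor memory is fixed for the lifetime of a SparkContext, but you can stop the context and create a new one with a different spark.executor.memory, since the executors it then requests from YARN are fresh JVMs. All names and sizes here are illustrative.

    import org.apache.spark.sql.SparkSession

    // Phase 1: executors sized near a node's total memory, so YARN places one per node.
    val phase1 = SparkSession.builder()
      .appName("phase-1-one-executor-per-node")
      .config("spark.executor.memory", "56g")   // close to the node's capacity (illustrative)
      .getOrCreate()
    // ... run the map tasks that must be spread across nodes ...
    phase1.stop()                                // releases all executors

    // Phase 2: smaller executors, so several tasks can run on each node.
    val phase2 = SparkSession.builder()
      .appName("phase-2-packed-executors")
      .config("spark.executor.memory", "8g")     // illustrative
      .getOrCreate()
    // ... run the remaining tasks ...

Splitting the work into two spark-submit invocations with different --executor-memory values achieves the same effect and avoids relying on restarting the session inside one driver JVM.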