When using EMR (with Spark and Zeppelin), changing spark.driver.memory
in the Zeppelin Spark interpreter settings doesn't work.
What is the best and quickest way to set the Spark driver memory when creating clusters through the EMR web interface (not the AWS CLI)?
Could a bootstrap action be a solution? If so, could you please provide an example of what the bootstrap action file should look like?
You can always try adding the following configuration at job flow/cluster creation:
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.memory": "12G"
    }
  }
]
You can do this for most configurations, whether spark-defaults, hadoop core-site, etc., as shown in the sketch below.
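For example, a configuration that sets properties in both the spark-defaults and core-site classifications could look like the following (the extra property values here are only illustrative placeholders, not recommendations):

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.memory": "12G",
      "spark.executor.memory": "4G"
    }
  },
  {
    "Classification": "core-site",
    "Properties": {
      "fs.trash.interval": "1440"
    }
  }
]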
I hope this helps!