I am using spark-1.6.0-bin-hadoop2.6. According to http://spark.apache.org/docs/latest/configuration.html,
I can set the executor heap size with spark.executor.memory, which corresponds to the --executor-memory flag of spark-submit.
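As a sketch of that equivalence (the 27G value is just the one used later in this question, not a recommendation), the executor heap can be set either with the dedicated flag or with the generic --conf option:

```shell
# Dedicated flag:
./bin/spark-submit --executor-memory 27G ...

# Equivalent --conf form using the configuration property name:
./bin/spark-submit --conf spark.executor.memory=27G ...
```

Both end up setting the same spark.executor.memory property; the flag form simply takes precedence over values from spark-defaults.conf.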
When running my job, the executor memory usage does not exceed the allocated amount, yet I still receive the error:
java.lang.OutOfMemoryError: Java heap space
I am submitting my job with:
./bin/spark-submit \
--class edu.gatech.cse8803.main.Main \
--master spark://ec2-52-23-155-99.compute-1.amazonaws.com:6066 \
--deploy-mode cluster \
--executor-memory 27G \
--total-executor-cores 100 \
/root/final_project/phenotyping_w_anchors_161-assembly-1.0.jar \
1000
I am using 2 m4.2xlarge instances (32.0 GB RAM, 8 cores each).
The issue was that not enough memory was being allocated to the driver. By default it was allocated only 1024.0 MB.
I fixed it by specifying 3 GB (probably more than necessary) with:
--driver-memory 3g
Example:
./bin/spark-submit \
--class edu.gatech.cse8803.main.Main \
--master spark://ec2-52-23-155-99.compute-1.amazonaws.com:6066 \
--deploy-mode cluster \
--executor-memory 27G \
--driver-memory 3g \
/root/final_project/phenotyping_w_anchors_161-assembly-1.0.jar \
1000
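Instead of passing these flags on every submit, the same values can be made persistent in conf/spark-defaults.conf (the values below simply mirror the command above; adjust them for your own cluster). One caveat worth noting: spark.driver.memory should be set on the command line or in this file rather than programmatically in a SparkConf, since in client mode the driver JVM has already started by the time application code runs.

```shell
# conf/spark-defaults.conf -- values mirror the spark-submit flags above;
# adjust for your own cluster.
spark.executor.memory   27g
spark.driver.memory     3g
```

Command-line flags still override anything set here, so one-off experiments with different memory sizes remain easy.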