Running a Spark job on 1 TB of data with the following configuration:
33 GB executor memory, 40 executors, 5 cores per executor,
17 GB memoryOverhead.
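For reference, that corresponds roughly to a spark-submit invocation like the one below (a sketch only; the class and jar names are placeholders, and the memoryOverhead key assumes an older Spark-on-YARN release where the value is given in megabytes):

    spark-submit \
      --master yarn \
      --executor-memory 33g \
      --num-executors 40 \
      --executor-cores 5 \
      --conf spark.yarn.executor.memoryOverhead=17408 \
      --class com.example.MyJob \
      my-job.jar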
What are the possible reasons for this error?
Where did you get that warning from? Which logs in particular? You're lucky you even get a warning :). Indeed, 17 GB seems like enough, but then you do have 1 TB of data; I've had to use more like 30 GB of overhead for less data than that.
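If you're not sure where the message came from, the aggregated YARN container logs are usually the place to look once the application has finished. A sketch (the application ID is a placeholder, and the exact message wording varies by version):

    # Pull the aggregated container logs for the finished application
    yarn logs -applicationId application_1234567890123_0042 > app.log
    # The NodeManager kill shows up as "running beyond physical memory limits";
    # Spark reports it as "Container killed by YARN for exceeding memory limits"
    grep -i "memory limits" app.log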
The reason for the error is that YARN counts extra memory used by the container that doesn't live in the executor's JVM heap. I've noticed that more tasks (partitions) means much more of this memory gets used, and that shuffles are generally heavier; other than that, I haven't seen any clear correspondence with what the job actually does. Something, somehow, is eating memory unnecessarily.
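The workarounds I know of are to give YARN more off-heap headroom per container and/or to lower per-executor concurrency so fewer tasks compete for the same container's memory. A sketch with illustrative values (spark.yarn.executor.memoryOverhead takes megabytes in older releases; in Spark 2.3+ the key is spark.executor.memoryOverhead; class and jar names are placeholders):

    # Raise the overhead from 17 GB to ~25 GB and drop cores per executor from 5 to 3
    spark-submit \
      --master yarn \
      --executor-memory 33g \
      --num-executors 40 \
      --executor-cores 3 \
      --conf spark.yarn.executor.memoryOverhead=25600 \
      --class com.example.MyJob \
      my-job.jar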
It seems the world is moving to Mesos; maybe it doesn't have this problem. Even better, just use Spark standalone.
More info: http://www.wdong.org/wordpress/blog/2015/01/08/spark-on-yarn-where-have-all-my-memory-gone/ (a deep dive into the way YARN gobbles memory). That link seems to be dead now; this one may work: http://m.blog.csdn.net/article/details?id=50387104. If not, try googling "spark on yarn where have all my memory gone".