I am using the uhopper/hadoop Docker image to create a YARN cluster. I have 3 nodes with 64 GB of RAM per node, and I have given 32 GB per node to YARN, so the total cluster memory is 96 GB. I have added the following configuration:
- name: YARN_CONF_yarn_scheduler_minimum___allocation___mb
  value: "2048"
- name: YARN_CONF_yarn_scheduler_maximum___allocation___mb
  value: "16384"
- name: MAPRED_CONF_mapreduce_framework_name
  value: "yarn"
- name: MAPRED_CONF_mapreduce_map_memory_mb
  value: "8192"
- name: MAPRED_CONF_mapreduce_reduce_memory_mb
  value: "8192"
- name: MAPRED_CONF_mapreduce_map_java_opts
  value: "-Xmx8192m"
- name: MAPRED_CONF_mapreduce_reduce_java_opts
  value: "-Xmx8192m"
- name: YARN_CONF_yarn_nodemanager_resource_memory___mb
  value: "32768"
Max Application Master Resources is 10240 MB. I ran 5 Spark jobs, each with 3 GB of driver memory, and 2 of the jobs never reached the RUNNING state because of the 10240 MB limit. I am unable to fully utilize my hardware.
How can I increase the Max Application Master Resources memory?
I think I found an answer: if you change yarn.scheduler.capacity.maximum-am-resource-percent, then Max Application Master Resources will change. Here's the documentation - Setting Application Limits on docs.hortonworks.com.
Let me know if it worked.
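For reference, this property lives in capacity-scheduler.xml. Below is a minimal sketch of the relevant entry; the 0.5 value is only an example, and if the image's environment-variable mechanism does not cover capacity-scheduler.xml you may have to mount or edit the file inside the containers yourself:

<!-- capacity-scheduler.xml (sketch): share of cluster memory that
     Application Masters may use; 0.5 here is only an example value -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>

After changing it, refresh the scheduler with yarn rmadmin -refreshQueues (or restart the ResourceManager) so the new limit takes effect.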
To change the Maximum Application Master Resources, you have to change yarn.scheduler.capacity.maximum-am-resource-percent, which defaults to 0.1 in the Capacity Scheduler, meaning 10% of the memory allocated to YARN can be used for Application Masters.
In your case the total memory given to YARN is 96 GB (98304 MB), and 10% of that is roughly the 10240 MB Max Application Master Resources you are seeing, so your cluster appears to be running with the default value.
Now, if you want to allocate more memory to your Application Masters, simply increase the percentage. It is generally recommended not to go above 0.5, so that most of the cluster memory stays available for the actual executors. Hope that makes it clear.
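To put rough numbers on it: 0.1 × 98304 MB ≈ 9830 MB, which appears to be rounded up to the next multiple of the 2048 MB minimum allocation, i.e. the 10240 MB shown in the UI. Each 3 GB Spark driver plus its memory overhead is likewise rounded up to 4096 MB, so only a few AM containers fit under that limit, which matches the two jobs you saw stuck outside the RUNNING state. Raising the percentage to 0.5 would give roughly 48 GB of AM headroom, more than enough to start all five drivers at once.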