Irrespective of the Spark executor core count, the YARN container for the executor never uses more than 1 core.
YARN shows 1 vcore per executor regardless of what spark.executor.cores is set to.
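For example (illustrative values only; the application jar and numbers are placeholders), a submission like this still shows each executor container with just 1 vcore in the YARN UI:

    spark-submit \
      --master yarn \
      --num-executors 2 \
      --executor-cores 4 \
      --executor-memory 4g \
      myapp.jar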
This happens because YARN uses DefaultResourceCalculator by default, which considers only memory when computing how many containers fit on a node. From org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator:
    public int computeAvailableContainers(Resource available, Resource required) {
      // Only consider memory
      return available.getMemory() / required.getMemory();
    }
Note that this is an accounting issue, not a performance one: each executor JVM still runs spark.executor.cores tasks concurrently; YARN simply does not reserve or display the vcores. To make YARN schedule and report by CPU as well as memory, use DominantResourceCalculator, which considers both resources.
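For comparison, the analogous logic in DominantResourceCalculator is bounded by the scarcer of the two resources (a paraphrased sketch, not a verbatim quote of the Hadoop source):

    public int computeAvailableContainers(Resource available, Resource required) {
      // A container fits only if both its memory and its vcore demands fit,
      // so a node's capacity is limited by the more constrained resource.
      return Math.min(
          available.getMemory() / required.getMemory(),
          available.getVirtualCores() / required.getVirtualCores());
    }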
Set the config below in capacity-scheduler.xml:
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DominantResourceCalculator
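Since capacity-scheduler.xml is an XML file, the setting is written as a property element:

    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>

The change typically requires restarting the ResourceManager (or refreshing the scheduler configuration with yarn rmadmin -refreshQueues) to take effect.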
More about DominantResourceCalculator can be found in the Hadoop documentation.