There are several similar-yet-different concepts in Spark-land surrounding how work gets farmed out to different nodes and executed concurrently. Specifically, there is:
- The Spark Driver (`sparkDriverCount`)
- The number of worker nodes in the cluster (`numWorkerNodes`)
- The number of Spark executors (`numExecutors`)
- The `dataFrame` being operated on by all workers/executors, concurrently
- The number of rows in the `dataFrame` (`numDFRows`)
- The number of partitions of the `dataFrame` (`numPartitions`)
- And finally, the number of CPU cores available on each worker node (`numCpuCoresPerWorker`)
I believe that all Spark clusters have one-and-only-one Spark Driver, and then 0+ worker nodes. If I'm wrong about that, please begin by correcting me! Assuming I'm more or less correct about that, let's lock in a few variables here. Let's say we have a Spark cluster with 1 Driver and 4 Worker nodes, and each Worker Node has 4 CPU cores on it (so a total of 16 CPU cores). So the "given" here is:
sparkDriverCount = 1
numWorkerNodes = 4
numCpuCores = numWorkerNodes * numCpuCoresPerWorker = 4 * 4 = 16
Given that as the setup, I'm wondering how to determine a few things. Specifically:

- What is the relationship between `numWorkerNodes` and `numExecutors`? Is there some known/generally-accepted ratio of workers to executors? Is there a way to determine `numExecutors` given `numWorkerNodes` (or any other inputs)?
- Is there a known/generally-accepted/optimal ratio of `numDFRows` to `numPartitions`? How does one calculate the 'optimal' number of partitions based on the size of the `dataFrame`?
- I've heard that a general 'rule of thumb' is `numPartitions = numWorkerNodes * numCpuCoresPerWorker`, any truth to that? In other words, it prescribes one partition per CPU core.

The best way to decide on the number of partitions in an RDD is to make the number of partitions equal to the number of cores in the cluster, so that all the partitions are processed in parallel and the resources are utilized in an optimal way.
Apache Spark can only run a single concurrent task for every partition of an RDD, up to the number of cores in your cluster (and probably 2-3x that). Hence, as far as choosing a "good" number of partitions, you generally want at least as many as the number of executors for parallelism.
The ideal size of each partition is around 100-200 MB. Smaller partitions increase the number of tasks that can run in parallel, which can improve performance, but partitions that are too small cause scheduling overhead and increase GC time.
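As a rough illustration of that sizing guideline, here is a minimal PySpark sketch; the input path, the ~8 GB size estimate, and the 128 MB target are assumed placeholder values, not anything taken from the question:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-sizing-sketch").getOrCreate()

# Illustrative only: suppose we have estimated the on-disk size of the input.
estimated_input_bytes = 8 * 1024 ** 3     # assume ~8 GB of data
target_partition_bytes = 128 * 1024 ** 2  # aim for ~100-200 MB per partition

num_partitions = max(1, estimated_input_bytes // target_partition_bytes)  # ~64 here

df = spark.read.csv("hdfs:///some/input/path", header=True)  # hypothetical path
df = df.repartition(num_partitions)
```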
The number of partitions of a Spark RDD can always be inspected with the RDD's `partitions` method (`getNumPartitions()` in PySpark); for example, it would report 6 for an RDD created with 6 partitions.
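For example, a small PySpark sketch of checking the partition count (the data and the choice of 6 partitions are just placeholders):

```python
from pyspark import SparkContext

sc = SparkContext(appName="partition-count-sketch")

# Explicitly ask for 6 partitions when parallelizing a small collection.
rdd = sc.parallelize(range(1000), 6)
print(rdd.getNumPartitions())  # -> 6

# DataFrames expose the same information through their underlying RDD:
# df.rdd.getNumPartitions()
```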
Yes, a Spark application has one and only one Driver.
What is the relationship between `numWorkerNodes` and `numExecutors`?
A worker can host multiple executors; you can think of the worker as the machine/node of your cluster and the executor as a process (executing on a core) that runs on that worker.
So `numWorkerNodes <= numExecutors`.
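As a sketch of how those two numbers relate in practice (the 2-executors-per-worker split and the config values below are assumptions for illustration, not recommendations): with the question's 4 workers of 4 cores each, you could run 2 executors of 2 cores per worker, i.e. 8 executors in total:

```python
from pyspark import SparkConf, SparkContext

# 4 workers, each hosting 2 executors with 2 cores each -> 8 executors, 16 task slots.
conf = (SparkConf()
        .setAppName("worker-executor-sketch")
        .set("spark.executor.instances", "8")   # total executors across the cluster
        .set("spark.executor.cores", "2"))      # cores (concurrent tasks) per executor

sc = SparkContext(conf=conf)
```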
Is there any ratio for them?
Personally, having worked both in a fake cluster, where my laptop was the Driver and a virtual machine on that very same laptop was the worker, and in an industrial cluster of >10k nodes, I didn't need to care about that, since it seems that Spark takes care of it.
I just use:
--num-executors 64
when I launch/submit my script, and Spark knows, I guess, how many workers it needs to summon (of course, by taking into account other parameters as well, and the nature of the machines).
Thus, personally, I don't know any such ratio.
Is there a known/generally-accepted/optimal ratio of `numDFRows` to `numPartitions`?
I am not aware of one, but as a rule of thumb you could rely on the product of #executors and #executor.cores, and then multiply that by 3 or 4. Of course this is a heuristic. In PySpark it would look like this:
from pyspark import SparkContext

sc = SparkContext(appName="smeeb-App")
# total task slots = #executors * #cores per executor; ask for ~3x that many partitions
total_cores = int(sc._conf.get('spark.executor.instances')) * int(sc._conf.get('spark.executor.cores'))
dataset = sc.textFile(input_path, total_cores * 3)
How does one calculate the 'optimal' number of partitions based on the size of the `DataFrame`?
That's a great question. Of course it's hard to answer, and it depends on your data, cluster, etc., but as I discussed here with myself:
Too few partitions and you will have enormous chunks of data, especially when you are dealing with big data, thus putting your application under memory stress.
Too many partitions and your HDFS will take a lot of pressure, since all the metadata that has to be generated from HDFS increases significantly as the number of partitions increases (since it maintains temp files, etc.). *
So what you want is to find a sweet spot for the number of partitions, which is one of the parts of fine-tuning your application. :)
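As a sketch of what acting on that sweet spot might look like in PySpark (assuming an existing DataFrame `df` and the 16-core cluster from the question; the 3x target is just the heuristic above):

```python
# Sketch only: df is assumed to be an existing DataFrame on the 16-core cluster above.
target = 16 * 3                      # 3x total cores -> 48 partitions

current = df.rdd.getNumPartitions()
if current < target:
    df = df.repartition(target)      # full shuffle: can increase the partition count
elif current > target * 2:
    df = df.coalesce(target)         # avoids a full shuffle: only decreases the count
```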
A 'rule of thumb' is `numPartitions = numWorkerNodes * numCpuCoresPerWorker`, is it true?
Ah, I was writing the heuristic above before seeing this. So this is already answered above, but take into account the difference between a worker and an executor.
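To make the difference concrete with the question's numbers (illustrative arithmetic only):

```python
numWorkerNodes = 4
numCpuCoresPerWorker = 4
numCpuCores = numWorkerNodes * numCpuCoresPerWorker       # 16

# The question's rule of thumb: one partition per core.
rule_of_thumb_partitions = numCpuCores                    # 16

# The 3-4x heuristic above (#executors * #executor.cores = 16 cores here).
heuristic_partitions = (numCpuCores * 3, numCpuCores * 4) # (48, 64)
```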
* I just failed because of this today: Prepare my bigdata with Spark via Python, where using too many partitions caused "Active tasks is a negative number" in the Spark UI.