I am trying to understand how Spark handles shuffle dependencies under the hood. I have two questions:
In Spark, how does an executor know from which other executors it has to pull data?
In a job with a shuffle dependency, does the driver schedule joins (or other tasks that depend on the shuffle) only after all map-side tasks have finished?
I will answer your questions point by point:
1. How does an executor know from which other executors it has to pull data? Simply put, an executor doesn't know what the other executors are doing, but the driver does. You can think of this process as queen and workers: the queen pushes the tasks to the executors, and each one reports back with the results when it finishes.
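A minimal sketch of that division of labour (app name, master URL and numbers are just placeholders): only the driver-side code knows about the cluster; the lambdas are shipped to whichever executors the driver picks, and those executors never contact each other for this job.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DriverVsExecutors {
  def main(args: Array[String]): Unit = {
    // Driver side ("queen"): builds the plan and schedules the tasks.
    val sc = new SparkContext(
      new SparkConf().setAppName("driver-vs-executors").setMaster("local[2]"))

    val numbers = sc.parallelize(1 to 100, numSlices = 4) // 4 partitions -> 4 tasks

    // Executor side ("workers"): this function is serialized by the driver and
    // run on the executors; each task only reports its result back to the driver.
    val doubled = numbers.map(_ * 2)

    // collect() triggers the job; the driver gathers the per-task results.
    println(doubled.collect().take(5).mkString(", "))

    sc.stop()
  }
}
```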
2. Does each executor, after finishing its map-side task, update its status and location to some central entity (maybe the driver)?
Yes, the driver monitors the process. When you create the SparkContext, each worker starts an executor. This is a separate process (JVM), and it loads your jar too. The executors connect back to your driver program. Now the driver can send them commands, like flatMap, map and reduceByKey in your example. When the driver quits, the executors shut down. You can also have a look at this answer: What is a task in Spark? How does the Spark worker execute the jar file?
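For context, here is a sketch of the kind of driver program described above (the input path is just a placeholder): the driver defines the flatMap/map/reduceByKey lineage, the executors that connected back to it run the resulting tasks, and everything shuts down when the driver stops.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("word-count"))

    val counts = sc.textFile("hdfs:///tmp/input.txt") // placeholder input path
      .flatMap(_.split("\\s+"))   // runs on executors
      .map(word => (word, 1))     // runs on executors
      .reduceByKey(_ + _)         // shuffle boundary: reduce tasks fetch map output

    counts.collect().foreach(println) // results are shipped back to the driver

    sc.stop() // when the driver quits, the executors shut down
  }
}
```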
3. Does the reduce-side executor first contact the driver to get the location of each executor to pull from, and then pull from those executors directly? A reduce task has priority to run on the same machine the data resides on, so there will not be any shuffle unless the data is not available there and there are no free resources.
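That locality preference can be tuned. A sketch of the relevant settings (the values shown are the usual defaults), which control how long the scheduler waits for a data-local slot before falling back to a less-local, network-heavy placement:

```scala
import org.apache.spark.SparkConf

// How long the scheduler waits for a slot on the node that already holds the
// data before degrading locality (process-local -> node-local -> rack-local -> any).
val conf = new SparkConf()
  .setAppName("locality-demo")
  .set("spark.locality.wait", "3s")       // base wait; 3s is the default
  .set("spark.locality.wait.node", "3s")  // wait before giving up on node-local
  .set("spark.locality.wait.rack", "3s")  // wait before giving up on rack-local
```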
4. In a job with a shuffle dependency, does the driver schedule joins (or other tasks that depend on the shuffle) only after all map-side tasks have finished?
It is configurable, so you can change it. You can have a look at this link for more information: https://0x0fff.com/spark-architecture-shuffle/
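To see where that shuffle boundary sits in your own job, you can inspect the RDD lineage. A small sketch, assuming the spark-shell (where sc already exists) and a placeholder path; the indentation change in toDebugString marks the stage split, and the reduce-side stage is submitted once the map-side stage has written its shuffle output:

```scala
val counts = sc.textFile("hdfs:///tmp/input.txt") // placeholder path
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

// Two stages separated by a ShuffledRDD: the inner, more indented part is the
// map side; the outer part is the reduce side that depends on its shuffle output.
println(counts.toDebugString)

// The wide (shuffle) dependency shows up explicitly in the lineage.
println(counts.dependencies) // contains a ShuffleDependency
```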
5. Does it mean that each task will notify the driver about its status and the driver will orchestrate other dependent tasks in a timely manner?
Each task notifies the driver and sends heartbeats to it, and Spark implements a speculative execution technique, so if any task fails or runs slowly, Spark will launch another copy of it. More details here: http://asyncified.io/2016/08/13/leveraging-spark-speculation-to-identify-and-re-schedule-slow-running-tasks/
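Speculation is off by default; a sketch of the settings that turn it on (the values shown are Spark's documented defaults):

```scala
import org.apache.spark.SparkConf

// With speculation enabled, tasks that run much slower than the rest of their
// stage get a duplicate launched on another executor; the first copy to finish wins.
val conf = new SparkConf()
  .set("spark.speculation", "true")
  .set("spark.speculation.interval", "100ms")  // how often to check for slow tasks
  .set("spark.speculation.multiplier", "1.5")  // how much slower than the median counts as slow
  .set("spark.speculation.quantile", "0.75")   // fraction of tasks that must finish before speculating
```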