
Spark Shuffle - How workers know where to pull data from

Tags:

apache-spark

I am trying to understand how Spark handles shuffle dependencies under the hood. I have two questions:

  1. In Spark, how does an executor know which other executors it has to pull data from?

    • Does each executor, after finishing its map-side task, report its status and location to some central entity (maybe the driver), so that a reduce-side executor first contacts the driver to get the location of each executor to pull from, and then pulls from those executors directly?
  2. In a job with a shuffle dependency, does the driver schedule joins (or other tasks on the shuffle dependency) only after all map-side tasks have finished?

    • Does that mean each task notifies the driver about its status, and the driver orchestrates the other dependent tasks in a timely manner?
Asked Feb 08 '17 by Gopal


1 Answer

I will answer your questions point by point.

1. How does an executor know from what other executors it has to pull data from?

An executor does not know what the other executors are doing, but the driver does. You can think of this process as a queen and her workers: the queen (the driver) pushes tasks to the executors, and each executor returns its results once it finishes its task.

2. Does each executor, after finishing its map-side task, update its status and location to some central entity (maybe the driver)?

Yes, the driver monitors the process. When you create the SparkContext, each worker starts an executor. Each executor is a separate process (JVM), and it loads your jar too. The executors connect back to your driver program, and the driver can then send them commands, like flatMap, map and reduceByKey in your example. When the driver quits, the executors shut down. You can also have a look at this answer: What is a task in Spark? How does the Spark worker execute the jar file?
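
To make this concrete, here is a minimal word-count sketch of that flow; the app name and HDFS paths are placeholders, not part of the original answer. The driver builds the SparkContext, and the flatMap/map/reduceByKey transformations are shipped to the executors as tasks.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // The driver process starts here; creating the SparkContext causes
    // each worker to launch an executor (a separate JVM) that loads your jar.
    val conf = new SparkConf().setAppName("WordCount") // placeholder name
    val sc = new SparkContext(conf)

    // The driver sends these transformations to the executors as tasks;
    // reduceByKey introduces the shuffle dependency.
    val counts = sc.textFile("hdfs:///input.txt")      // placeholder path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///output")            // placeholder path
    sc.stop() // when the driver quits, the executors shut down
  }
}
```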

3. Does the reduce-side executor first contact the driver to get the location of each executor to pull from, and then pull from those executors directly?

A reduce task is preferentially scheduled on the same machine the data lives on, so there will not be any shuffle unless the data is not available locally and there are no free resources on that machine.
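
One setting related to this locality preference is spark.locality.wait, which controls how long the scheduler waits for a slot at the preferred locality level before relaxing it (process-local, then node-local, then rack-local, then any). A minimal sketch; the override value is purely illustrative:

```scala
import org.apache.spark.SparkConf

// spark.locality.wait defaults to 3s; raising it makes the scheduler wait
// longer for a data-local slot before falling back to a less local one.
val conf = new SparkConf()
  .setAppName("LocalityDemo")          // placeholder app name
  .set("spark.locality.wait", "5s")    // illustrative override
```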

4. In a job with a shuffle dependency, does the driver schedule joins (or other tasks on the shuffle dependency) only after all map-side tasks have finished?

This behaviour is configurable; you can change it. Have a look at this link for more information: https://0x0fff.com/spark-architecture-shuffle/
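
As a hedged illustration, a few of the shuffle-related settings covered in that article can be set on the SparkConf; the values below are the documented defaults, shown only as a sketch:

```scala
import org.apache.spark.SparkConf

// Shuffle-related knobs; values shown are the documented defaults.
val conf = new SparkConf()
  .setAppName("ShuffleConfigDemo")               // placeholder app name
  .set("spark.shuffle.compress", "true")         // compress map output files
  .set("spark.reducer.maxSizeInFlight", "48m")   // per-reducer fetch buffer
  .set("spark.shuffle.file.buffer", "32k")       // buffer per shuffle file writer
```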

5. Does it mean that each task will notify the driver about its status, and the driver will orchestrate the other dependent tasks in a timely manner?

Each task notifies the driver and sends heartbeats to it, and Spark implements a speculative-execution technique: if a task fails or runs slowly, Spark will launch another copy of it. More details here: http://asyncified.io/2016/08/13/leveraging-spark-speculation-to-identify-and-re-schedule-slow-running-tasks/
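
A minimal sketch of enabling speculation; these are Spark's standard speculation settings, and apart from spark.speculation itself (which defaults to false) the values shown are the documented defaults:

```scala
import org.apache.spark.SparkConf

// Speculative execution: the driver tracks task runtimes and launches a
// duplicate copy of tasks that run much slower than their stage's median.
val conf = new SparkConf()
  .setAppName("SpeculationDemo")               // placeholder app name
  .set("spark.speculation", "true")            // off by default
  .set("spark.speculation.interval", "100ms")  // how often to check for slow tasks
  .set("spark.speculation.multiplier", "1.5")  // slower-than-median factor that triggers a copy
  .set("spark.speculation.quantile", "0.75")   // fraction of tasks that must finish first
```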

Answered Sep 27 '22 by Moustafa Mahmoud