 

How Spark works internally

Tags:

apache-spark

I know that Spark can be used from Scala, Python and Java, and that RDDs are used to store data.

But please explain: what is Spark's architecture, and how does it work internally?

asked Jun 07 '15 by Monika Bhajipale


People also ask

What is Spark and how it works?

Spark is a general-purpose distributed data processing engine that is suitable for use in a wide range of circumstances. On top of the Spark core data processing engine, there are libraries for SQL, machine learning, graph computation, and stream processing, which can be used together in an application.

How does the Spark cluster work internally?

Spark translates the RDD transformations into a DAG (Directed Acyclic Graph) and starts the execution. At a high level, when any action is called on the RDD, Spark creates the DAG and submits it to the DAG scheduler, which divides the operators into stages of tasks.

What is Spark internal?

Spark is an open-source distributed computing engine used for processing and analyzing large amounts of data. Like Hadoop MapReduce, it distributes data across the cluster and processes it in parallel. Spark uses a master/slave architecture: one master node and many slave (worker) nodes.

What happens internally when you submit a Spark job?

When you do a Spark submit, a driver program is launched. The driver requests resources from the cluster manager and, at the same time, starts the main program of the user's application.


2 Answers

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. RDDs support two types of operations: transformations, which create a new dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset.
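For a minimal sketch of that difference, assuming a spark-shell session where sc is the pre-built SparkContext:

// Transformations are lazy: they only describe a new dataset.
val nums    = sc.parallelize(1 to 10)
val doubled = nums.map(_ * 2)             // transformation -> new RDD
val evens   = doubled.filter(_ % 4 == 0)  // transformation -> new RDD

// Actions trigger the computation and return a value to the driver.
val total  = evens.reduce(_ + _)          // Int
val sample = evens.take(2)                // Array[Int]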

Spark translates the RDD transformations into a DAG (Directed Acyclic Graph) and starts the execution.

At a high level, when any action is called on the RDD, Spark creates the DAG and submits it to the DAG scheduler.

  • The DAG scheduler divides operators into stages of tasks. A stage is composed of tasks based on partitions of the input data. The DAG scheduler pipelines operators together; e.g., many map operators can be scheduled in a single stage. The final result of the DAG scheduler is a set of stages.

  • The stages are passed on to the task scheduler. The task scheduler launches tasks via the cluster manager (Spark Standalone/YARN/Mesos). The task scheduler doesn't know about the dependencies between stages.

  • The Worker/Slave executes the tasks.

Now let's look at how Spark builds the DAG.

At a high level, there are two kinds of transformations that can be applied to RDDs, namely narrow transformations and wide transformations. Wide transformations result in stage boundaries.

Narrow transformation - doesn't require the data to be shuffled across the partitions. For example, map, filter, etc.

Wide transformation - requires the data to be shuffled, for example, reduceByKey, etc.
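A rough sketch of the distinction, again assuming sc is available in a spark-shell session:

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

// Narrow: each output partition depends on only one input partition, so no shuffle is needed.
val upper    = pairs.map { case (k, v) => (k.toUpperCase, v) }
val nonEmpty = upper.filter { case (k, _) => k.nonEmpty }

// Wide: values for the same key may sit in different partitions, so the data
// must be shuffled across the cluster, which creates a stage boundary.
val sums = nonEmpty.reduceByKey(_ + _)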

Let's take an example of counting how many log messages appear at each level of severity.

Following is a log file in which each line starts with the severity level:

INFO I'm Info message
WARN I'm a Warn message
INFO I'm another Info message

and the following Scala code to compute the counts:

val input = sc.textFile("log.txt")
val splitedLines = input.map(line => line.split(" "))
                        .map(words => (words(0), 1))
                        .reduceByKey{(a,b) => a + b}

This sequence of commands implicitly defines a DAG of RDD objects (the RDD lineage) that will be used later when an action is called. Each RDD maintains a pointer to one or more parents, along with metadata about what type of relationship it has with each parent. For example, when we call val b = a.map() on an RDD, the RDD b keeps a reference to its parent a; that's the lineage.
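As a small illustration (same spark-shell assumption), that parent pointer can be inspected through the RDD's dependencies field; the printed output shown in the comment is only indicative:

val a = sc.parallelize(1 to 5)
val b = a.map(_ + 1)       // b records a as its parent

// map produces a narrow, one-to-one dependency on `a`.
println(b.dependencies)    // e.g. List(org.apache.spark.OneToOneDependency@...)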

To display the lineage of an RDD, Spark provides the toDebugString() method. For example, calling toDebugString() on the splitedLines RDD will output the following:

(2) ShuffledRDD[6] at reduceByKey at <console>:25 []
 +-(2) MapPartitionsRDD[5] at map at <console>:24 []
    |  MapPartitionsRDD[4] at map at <console>:23 []
    |  log.txt MapPartitionsRDD[1] at textFile at <console>:21 []
    |  log.txt HadoopRDD[0] at textFile at <console>:21 []

The first line (from the bottom) shows the input RDD. We created this RDD by calling sc.textFile(). Below is a more diagrammatic view of the DAG created from the given RDD.

[Image: RDD DAG graph]

Once the DAG is built, the Spark scheduler creates a physical execution plan. As mentioned above, the DAG scheduler splits the graph into multiple stages, and the stages are created based on the transformations: narrow transformations are grouped (pipelined) together into a single stage. So for our example, Spark will create a two-stage execution as follows:

[Image: Stages]

The DAG scheduler then submits the stages to the task scheduler. The number of tasks submitted depends on the number of partitions present in textFile. For example, if we have 4 partitions in this example, then 4 sets of tasks will be created and submitted in parallel, provided there are enough slaves/cores. The diagram below illustrates this in a bit more detail:

[Image: Task execution]
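As a hedged sketch of how the partition count (and therefore the task count per stage) can be influenced, reusing the log.txt example:

// The second argument to textFile is a minimum-partition hint; each resulting
// partition becomes one task in every stage that processes it.
val input = sc.textFile("log.txt", 4)
println(input.getNumPartitions)    // typically 4 for a small file, since the value is only a hint

// An existing RDD can also be reshaped explicitly.
val widened = input.repartition(8)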

For more detailed information, I suggest going through the following YouTube videos, in which the Spark creators give in-depth details about the DAG, the execution plan, and the job lifecycle.

  1. Advanced Apache Spark - Sameer Farooqui (Databricks)
  2. A Deeper Understanding of Spark Internals - Aaron Davidson (Databricks)
  3. Introduction to AmpLab Spark Internals
answered Oct 12 '22 by Sathish


Here is some Apache Spark jargon I will be using.

Job: A piece of code that reads some input from HDFS or the local filesystem, performs some computation on the data, and writes some output data.

Stages: Jobs are divided into stages. Stages are classified as map or reduce stages (this is easier to understand if you have worked on Hadoop and want to correlate). Stages are divided based on computational boundaries; all computations (operators) cannot be executed in a single stage, so the work happens over many stages.

Tasks: Each stage has some tasks, one task per partition. One task is executed on one partition of data on one executor (machine).

DAG: DAG stands for Directed Acyclic Graph; in the present context it is a DAG of operators.

Executor: The process responsible for executing a task.

Driver: The program/process responsible for running the job over the Spark engine.

Master: The machine on which the driver program runs.

Slave: The machine on which the executor program runs.
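To map this jargon onto code, here is a minimal sketch of a standalone application's setup; the master URL spark://master-host:7077 and the resource values are assumptions for illustration, not values from the question:

import org.apache.spark.{SparkConf, SparkContext}

// The driver is the JVM that runs this code; the executors are the JVMs
// launched on the worker (slave) machines to run its tasks.
// "spark://master-host:7077" is a hypothetical standalone master URL.
val conf = new SparkConf()
  .setAppName("log-level-count")
  .setMaster("spark://master-host:7077")
  .set("spark.executor.memory", "2g")   // memory per executor JVM
  .set("spark.executor.cores", "2")     // task slots per executor

val sc = new SparkContext(conf)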

All jobs in Spark comprise a series of operators and run on a set of data. All the operators in a job are used to construct a DAG (Directed Acyclic Graph). The DAG is optimized by rearranging and combining operators where possible. For instance, let's assume you have to submit a Spark job which contains a map operation followed by a filter operation. The Spark DAG optimizer would rearrange the order of these operators, since filtering reduces the number of records that undergo the map operation.

Spark has a small code base, and the system is divided into various layers. Each layer has some responsibilities. The layers are independent of each other.


  1. The first layer is the interpreter; Spark uses the Scala interpreter, with some modifications.
  2. As you enter your code in the Spark console (creating RDDs and applying operators), Spark creates an operator graph.
  3. When the user runs an action (like collect), the graph is submitted to the DAG scheduler. The DAG scheduler divides the operator graph into (map and reduce) stages.
  4. A stage is composed of tasks based on partitions of the input data. The DAG scheduler pipelines operators together to optimize the graph; e.g., many map operators can be scheduled in a single stage. This optimization is key to Spark's performance. The final result of the DAG scheduler is a set of stages.
  5. The stages are passed on to the task scheduler. The task scheduler launches tasks via the cluster manager (Spark Standalone/YARN/Mesos). The task scheduler doesn't know about dependencies among stages.
  6. The worker executes the tasks on the slave. A new JVM is started per job. The worker knows only about the code that is passed to it.

Spark caches the data to be processed, which can make it up to 100 times faster than Hadoop MapReduce for some workloads. Spark is highly configurable and is capable of utilizing components that already exist in the Hadoop ecosystem. This has allowed Spark to grow quickly, and in a short time many organisations are already using it in production.
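A minimal sketch of that caching, reusing the log.txt example from the first answer (other.txt is just a hypothetical second input):

import org.apache.spark.storage.StorageLevel

val logs = sc.textFile("log.txt")

// cache() keeps the RDD in executor memory after the first action computes it,
// so subsequent actions reuse it instead of re-reading the file.
logs.cache()   // shorthand for persist(StorageLevel.MEMORY_ONLY)

val infoCount = logs.filter(_.startsWith("INFO")).count()  // computes and caches the RDD
val warnCount = logs.filter(_.startsWith("WARN")).count()  // reuses the cached partitions

// persist() accepts other storage levels, e.g. spilling to disk when memory is tight.
val other = sc.textFile("other.txt").persist(StorageLevel.MEMORY_AND_DISK)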

answered Oct 12 '22 by Arush Kharbanda