 

reduceByKey: How does it work internally?

I am new to Spark and Scala. I was confused about the way reduceByKey function works in Spark. Suppose we have the following code:

val lines = sc.textFile("data.txt")
val pairs = lines.map(s => (s, 1))
val counts = pairs.reduceByKey((a, b) => a + b)

The map function is clear: s is the key and refers to a line from data.txt, and 1 is the value.

However, I don't understand how reduceByKey works internally. Does "a" point to the key? Or does "a" point to "s"? And what does a + b represent? How are these values filled in?

asked May 09 '15 by user764186

People also ask

How does reduceByKey work in Spark?

Spark reduceByKey Function In Spark, the reduceByKey function is a frequently used transformation operation that performs aggregation of data. It receives key-value pairs (K, V) as an input, aggregates the values based on the key and generates a dataset of (K, V) pairs as an output.
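For example, a small sketch with made-up data (reusing the SparkContext name sc from the question) shows that shape of input and output:

// Two of the input pairs share the key "a"; reduceByKey sums their values.
val kv = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
kv.reduceByKey((x, y) => x + y).collect()
// e.g. Array((a,4), (b,2))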

Is reduceByKey a transformation or action?

Spark RDD reduceByKey is a transformation function which merges the values for each key using an associative reduce function.

What does reduceByKey do in PySpark?

PySpark reduceByKey() transformation is used to merge the values of each key using an associative reduce function on a PySpark RDD. It is a wider transformation, as it shuffles data across multiple partitions, and it operates on pair RDDs (key/value pairs).

What would be the output of RDD reduceByKey?

RDD reduceByKey() Example: In this example, reduceByKey() is used to reduce the word counts by applying the + operator to the values. The resulting RDD contains the unique words and their counts.
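A minimal word-count sketch in Scala along those lines (the file name and its contents are hypothetical):

sc.textFile("words.txt")
  .flatMap(line => line.split("\\s+"))   // break each line into words
  .map(word => (word, 1))                // pair every word with a count of 1
  .reduceByKey(_ + _)                    // sum the 1s per word
  .collect()
// e.g. Array((spark,3), (scala,2), ...)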


2 Answers

Let's break it down into discrete methods and types. That usually exposes the intricacies for new developers:

pairs.reduceByKey((a, b) => a + b) 

becomes

pairs.reduceByKey((a: Int, b: Int) => a + b) 

and renaming the variables makes it a little more explicit

pairs.reduceByKey((accumulatedValue: Int, currentValue: Int) => accumulatedValue + currentValue) 

So, we can now see that we are simply taking an accumulated value for the given key and summing it with the next value of that key. NOW, let's break it down further so we can understand the key part. So, let's visualize the method more like this:

pairs.reduce((accumulatedValue: List[(String, Int)], currentValue: (String, Int)) => {
  // Turn the accumulated value into a true key -> value mapping
  val accumAsMap = accumulatedValue.toMap
  // Try to get the key's current value if we've already encountered it
  accumAsMap.get(currentValue._1) match {
    // If we have encountered it, then add the new value to the existing value and overwrite the old
    case Some(value: Int) => (accumAsMap + (currentValue._1 -> (value + currentValue._2))).toList
    // If we have NOT encountered it, then simply add it to the list
    case None => currentValue :: accumulatedValue
  }
})

So, you can see that reduceByKey takes care of the boilerplate of finding the key and tracking it, so that you don't have to worry about managing that part.
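If it helps to see that bookkeeping spelled out in plain Scala, here is a local sketch (not what Spark executes, and using a Map as the accumulator for brevity) of the same idea with foldLeft:

// Local stand-in: accumulate a running (key -> sum) map by hand.
val localPairs = List(("a", 1), ("b", 1), ("a", 1))
val summed = localPairs.foldLeft(Map.empty[String, Int]) { (acc, kv) =>
  val (key, value) = kv
  // If the key was seen before, add to its running total; otherwise start from this value.
  acc + (key -> (acc.getOrElse(key, 0) + value))
}
// summed == Map("a" -> 2, "b" -> 1)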

Deeper, truer if you want

All that being said, this is a simplified version of what happens, as there are some optimizations performed here. This operation is associative, so the Spark engine will perform these reductions locally first (often termed a map-side reduce) and then once again on the reducer side after the shuffle. This saves network traffic; instead of sending all the data and performing the operation in one place, it reduces each partition's data as small as it can and then sends that reduction over the wire.
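As a rough sketch (not Spark's exact internals), the same behaviour can be spelled out with combineByKey, where the within-partition merge and the after-shuffle merge are the same addition:

// Roughly what pairs.reduceByKey((a, b) => a + b) amounts to:
pairs.combineByKey(
  (v: Int) => v,                  // createCombiner: first value seen for a key in a partition
  (acc: Int, v: Int) => acc + v,  // mergeValue: combine within a partition (map-side)
  (a: Int, b: Int) => a + b       // mergeCombiners: combine partial results after the shuffle
)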

answered Sep 21 '22 by Justin Pihony

One requirement for the reduceByKey function is that it must be associative. To build some intuition about how reduceByKey works, let's first see how an associative function helps us in a parallel computation:

associative function in action

As we can see, we can break the original collection into pieces and, by applying the associative function to each piece, accumulate a total. The sequential case is trivial; we are used to it: 1+2+3+4+5+6+7+8+9+10.

Associativity lets us use that same function in sequence and in parallel. reduceByKey uses that property to compute a result out of an RDD, which is a distributed collection consisting of partitions.
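A tiny local illustration of that property (plain Scala collections, no Spark): splitting the sequence into chunks, reducing each chunk, and then reducing the partial results gives the same total as one sequential pass.

val numbers = (1 to 10).toList
// Sequential: (((1 + 2) + 3) + ...) + 10
val sequential = numbers.reduce(_ + _)                               // 55
// "Parallel-style": reduce each chunk, then reduce the partial sums.
val chunked = numbers.grouped(3).map(_.reduce(_ + _)).reduce(_ + _)  // 55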

Consider the following example:

// collection of the form ("key",1),("key",2),...,("key",20) split among 4 partitions
val rdd = sparkContext.parallelize((1 to 20).map(x => ("key", x)), 4)
rdd.reduceByKey(_ + _).collect()
> Array[(String, Int)] = Array((key,210))

In Spark, data is distributed across partitions. In the next illustration, the (4) partitions are on the left, enclosed in thin lines. First, we apply the function locally to each partition, sequentially within the partition, with all 4 partitions running in parallel. Then, the results of those local computations are aggregated by applying the same function again, finally arriving at a result.

(illustration: the function applied locally within each of the 4 partitions, then again across the partial results)
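To see that split for yourself, one sketch (reusing the rdd from the example above) is to look at the partitions with glom, which exposes each partition as an array:

// Each inner array is one partition's local contents before any reduction.
rdd.glom().map(_.length).collect()         // e.g. Array(5, 5, 5, 5)
// Local, per-partition sums -- the partial results that the shuffle then combines.
rdd.glom().map(_.map(_._2).sum).collect()  // e.g. Array(15, 40, 65, 90)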

reduceByKey is a specialization of aggregateByKey. aggregateByKey takes 2 functions: one that is applied to each partition (sequentially) and one that is applied among the results of each partition (in parallel). reduceByKey uses the same associative function in both cases: to do the sequential computation within each partition and then combine those partial results into a final result, as we have illustrated here.
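As a sketch of that relationship, the example above could equally be written with aggregateByKey, passing 0 as the zero value and the same addition for both steps:

// Behaves like reduceByKey(_ + _) for this Int-sum case:
rdd.aggregateByKey(0)(
  (acc, value) => acc + value,  // applied sequentially within each partition
  (a, b) => a + b               // applied across the per-partition results
).collect()
// Array((key,210))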

answered Sep 20 '22 by maasg