
Apache Spark and Python lambda

I have the following code

file = spark.textFile("hdfs://...")
counts = file.flatMap(lambda line: line.split(" ")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("hdfs://...")

I have copied the example from here: http://spark.apache.org/examples.html

I am unable to understand this code, especially the keywords:

  1. flatMap,
  2. map, and
  3. reduceByKey

Can someone please explain in plain English what's going on?

asked Jul 04 '14 by jhon.smith


1 Answer

map is the easiest: it essentially says do the given operation on every element of the sequence and return the resulting sequence (very similar to foreach).

flatMap is the same thing, but instead of returning exactly one element per input element you are allowed to return a sequence (which can be empty). Here's an answer explaining the difference between map and flatMap.

Lastly, reduceByKey takes an aggregate function (meaning it takes two arguments of the same type and returns that type; it should also be commutative and associative, otherwise you will get inconsistent results), which is used to aggregate every V for each K in your sequence of (K, V) pairs.
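As a minimal sketch of the map vs. flatMap difference (assuming a PySpark shell where sc is the SparkContext, with a made-up two-line dataset):

rdd = sc.parallelize(["hello world", "foo"])

rdd.map(lambda line: line.split(" ")).collect()
# [['hello', 'world'], ['foo']]   -- one list element per input line

rdd.flatMap(lambda line: line.split(" ")).collect()
# ['hello', 'world', 'foo']       -- the per-line lists are flattened into individual words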

EXAMPLE:

from functools import reduce  # needed on Python 3; reduce is a builtin on Python 2
reduce(lambda a, b: a + b, [1, 2, 3, 4])

This says: aggregate the whole list with +, so it will do

1 + 2 = 3  
3 + 3 = 6  
6 + 4 = 10  
The final result is 10.

reduceByKey is the same thing, except you do a reduce for each unique key.
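For instance, a tiny illustrative run (again assuming sc is the SparkContext; the data is made up):

pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
pairs.reduceByKey(lambda a, b: a + b).collect()
# [('a', 2), ('b', 1)]  -- the + reduce was applied separately per unique key (output order may vary)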


So, to explain it using your example:

file = spark.textFile("hdfs://...")            # open the text file; each element of the RDD is one line of the file
counts = (file
    .flatMap(lambda line: line.split(" "))     # flatMap is needed here to return every word in the line (split on spaces) as its own element
    .map(lambda word: (word, 1))               # map each word to the pair (word, 1) so the 1's can be summed
    .reduceByKey(lambda a, b: a + b))          # aggregate (add up) all the 1's per unique word, giving an RDD of (word, count) pairs
counts.saveAsTextFile("hdfs://...")            # save the result back onto HDFS
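With a toy input line like "hello world hello", the pairs written out by saveAsTextFile would look like ('hello', 2) and ('world', 1), i.e. the string form of each (word, count) tuple, spread across one or more part files.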

So why count words this way? The reason is that the MapReduce programming paradigm is highly parallelizable, and therefore scales to doing this computation on terabytes or even petabytes of data.


I don't use Python much, so tell me if I made a mistake.

answered Nov 14 '22 by aaronman