
Difference between two DataFrames columns in pyspark

I am looking for a way to find the difference in values between columns of two DataFrames. For example:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sql_context = SQLContext(sc)

df_a = sql_context.createDataFrame([("a", 3), ("b", 5), ("c", 7)], ["name", "id"])

df_b = sql_context.createDataFrame([("a", 3), ("b", 10), ("c", 13)], ["name", "id"])

DataFrame A:

+----+---+
|name| id|
+----+---+
|   a|  3|
|   b|  5|
|   c|  7|
+----+---+

DataFrame B:

+----+---+
|name| id|
+----+---+
|   a|  3|
|   b| 10|
|   c| 13|
+----+---+

My goal is a list of the id column elements that are in A but not in B, e.g. [5, 7]. I was thinking of doing a join on id, but I don't see a good way to do it.

A naive solution could be:

list_a = df_a.select("id").rdd.map(lambda x: x.asDict()["id"]).collect()
list_b = df_b.select("id").rdd.map(lambda x: x.asDict()["id"]).collect()

result = list(set(list_a).difference(list_b))

But is there a simpler solution that uses only DataFrame operations, except perhaps the final collect?

asked Jul 13 '16 by Akavall

1 Answer

Use the subtract function:

df_a.select('id').subtract(df_b.select('id')).collect()
answered Oct 13 '22 by David