
PySpark DataFrame - Join on multiple columns dynamically

Let's say I have two DataFrames in Spark:

firstdf = sqlContext.createDataFrame([
    {'firstdf-id': 1, 'firstdf-column1': 2, 'firstdf-column2': 3, 'firstdf-column3': 4},
    {'firstdf-id': 2, 'firstdf-column1': 3, 'firstdf-column2': 4, 'firstdf-column3': 5}
])

seconddf = sqlContext.createDataFrame([
    {'seconddf-id': 1, 'seconddf-column1': 2, 'seconddf-column2': 4, 'seconddf-column3': 5},
    {'seconddf-id': 2, 'seconddf-column1': 6, 'seconddf-column2': 7, 'seconddf-column3': 8}
])

Now I want to join them on multiple columns (any number greater than one).

What I have is an array of columns from the first DataFrame and an array of columns from the second DataFrame. These arrays have the same size, and I want to join on the columns specified in them. For example:

columnsFirstDf = ['firstdf-id', 'firstdf-column1']
columnsSecondDf = ['seconddf-id', 'seconddf-column1']

Since these arrays have variable sizes, I can't hard-code the condition like this:

from pyspark.sql.functions import col

firstdf.join(
    seconddf,
    (col(columnsFirstDf[0]) == col(columnsSecondDf[0])) &
    (col(columnsFirstDf[1]) == col(columnsSecondDf[1])),
    'inner'
)

Is there any way that I can join on multiple columns dynamically?

asked Sep 21 '16 by Pedro Bernardo

2 Answers

Why not use a simple list comprehension:

from pyspark.sql.functions import col

firstdf.join(
    seconddf,
    [col(f) == col(s) for (f, s) in zip(columnsFirstDf, columnsSecondDf)],
    "inner"
)

Since the conditions are combined with a logical AND, it is enough to provide a list of conditions without the & operator.
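
If you prefer a single explicit Column expression (for example, to combine it with other predicates), the list above is equivalent to folding the conditions together with &. A minimal sketch, reusing columnsFirstDf and columnsSecondDf from the question:

from functools import reduce
from pyspark.sql.functions import col

# Build one combined condition: (f0 == s0) & (f1 == s1) & ...
conditions = [col(f) == col(s) for f, s in zip(columnsFirstDf, columnsSecondDf)]
join_condition = reduce(lambda acc, cond: acc & cond, conditions)

firstdf.join(seconddf, join_condition, "inner")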

answered Nov 11 '22 by zero323

@Mohan: sorry, I don't have the reputation to add a comment. If the same columns exist in both DataFrames, create a list of those column names and use it in the join:

col_list = ["id", "column1", "column2"]
firstdf.join(seconddf, col_list, "inner")
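
Note that this name-based form only works when the join columns have identical names in both DataFrames, and Spark keeps a single copy of each join column in the result. The question's DataFrames use different prefixes, so here is a minimal sketch (assuming the columnsFirstDf/columnsSecondDf mapping from the question) that renames the second DataFrame's columns first:

from functools import reduce

# Rename seconddf's join columns to match firstdf's, then join by name.
renamed = reduce(
    lambda df, pair: df.withColumnRenamed(pair[1], pair[0]),
    zip(columnsFirstDf, columnsSecondDf),
    seconddf
)

firstdf.join(renamed, columnsFirstDf, "inner")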
answered Nov 11 '22 by Balaji SS