Join in Spark DataFrame (Scala) based on not-null values

Suppose I have two dataframes like the following:

First -

A    | B    | C    | D
1a   | 1b   | 1c   | 1d
2a   | null | 2c   | 2d
3a   | null | null | 3d
4a   | 4b   | null | null
5a   | null | null | null
6a   | 6b   | 6c   | null

Second -

P    | B    | C    | D
1p   | 1b   | 1c   | 1d
2p   | 2b   | 2c   | 2d
3p   | 3b   | 3c   | 3d
4p   | 4b   | 4c   | 4d 
5p   | 5b   | 5c   | 5d
6p   | 6b   | 6c   | 6d 

The join is performed on the columns {"B", "C", "D"}. If any of these columns is null in First, the match should fall back to whichever of the remaining columns are not null.

So, the result should be:

P    | B    | C    | D    | A
1p   | 1b   | 1c   | 1d   | 1a
2p   | null | 2c   | 2d   | 2a
3p   | null | null | 3d   | 3a
4p   | 4b   | null | null | 4a // First(C) & First(D) were null, so we match on B only
6p   | 6b   | 6c   | null | 6a

Can anyone suggest a solution for this? Currently I am filtering the rows that have null values in one, two, or three of the columns, then joining each subset with Second on the columns that remain. For example, I first filter out the rows of First where only B is null, then join them with Second on "C" and "D". This way I end up with many dataframes that I finally union, as in the sketch below.
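For what it's worth, here is a rough sketch of what I mean, assuming the two tables are loaded as DataFrames named first and second (the joinPattern helper and its name are mine, purely for illustration):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Hypothetical helper: take the rows of `first` whose null columns are
// exactly `nullCols`, join them with `second` on the remaining key
// columns, and normalize column order so the results can be unioned.
def joinPattern(first: DataFrame, second: DataFrame, nullCols: Set[String]): DataFrame = {
  val keys = Seq("B", "C", "D").filterNot(nullCols)
  val pattern = Seq("B", "C", "D")
    .map(c => if (nullCols(c)) col(c).isNull else col(c).isNotNull)
    .reduce(_ && _)
  first.filter(pattern)
    .join(second.drop(nullCols.toSeq: _*), keys)
    .select("P", "B", "C", "D", "A")
}

// every null pattern that leaves at least one key column, then union
val result = Seq(Set.empty[String], Set("B"), Set("C"), Set("D"),
                 Set("B", "C"), Set("B", "D"), Set("C", "D"))
  .map(joinPattern(first, second, _))
  .reduce(_ union _)

As you can see, the number of joins grows with the number of null patterns, which is why I am looking for something simpler.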

asked Oct 20 '25 by Ishan

1 Answer

Here's what you can do. In Spark SQL, null === x evaluates to null rather than true, and a join condition that is not true is treated as no match. So a null column in First simply never contributes a match, and OR-ing the three equalities gives exactly the fallback behavior you describe:

import org.apache.spark.sql.functions._

// inner join: a pair of rows matches if ANY of B, C or D are equal;
// null columns never match because `null === x` is not true
df1.join(broadcast(df2), df1("B") === df2("B") || df1("C") === df2("C") || df1("D") === df2("D"))
  .drop(df2("B"))  // keep First's B, C, D (with their nulls), drop Second's
  .drop(df2("C"))
  .drop(df2("D"))
  .show(false)

To be safe, broadcast whichever dataframe is smaller. Since the condition is a disjunction rather than an equi-join, Spark falls back to a nested-loop strategy, so broadcasting the small side matters all the more.
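For reference, here is a self-contained run against the sample data from the question (a minimal sketch, assuming a SparkSession named spark is in scope; None becomes null via toDF):

import org.apache.spark.sql.functions._
import spark.implicits._

val df1 = Seq(
  ("1a", Some("1b"), Some("1c"), Some("1d")),
  ("2a", None,       Some("2c"), Some("2d")),
  ("3a", None,       None,       Some("3d")),
  ("4a", Some("4b"), None,       None),
  ("5a", None,       None,       None),
  ("6a", Some("6b"), Some("6c"), None)
).toDF("A", "B", "C", "D")

val df2 = Seq(
  ("1p", "1b", "1c", "1d"), ("2p", "2b", "2c", "2d"),
  ("3p", "3b", "3c", "3d"), ("4p", "4b", "4c", "4d"),
  ("5p", "5b", "5c", "5d"), ("6p", "6b", "6c", "6d")
).toDF("P", "B", "C", "D")

df1.join(broadcast(df2),
    df1("B") === df2("B") || df1("C") === df2("C") || df1("D") === df2("D"))
  .drop(df2("B")).drop(df2("C")).drop(df2("D"))
  .select("P", "B", "C", "D", "A")
  .show(false)

The all-null row (5a) drops out on its own: none of the three equalities can evaluate to true for it, which matches your expected output.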

answered Oct 21 '25 by Ramesh Maharjan

