
Spark DataFrame: drop duplicates and keep first

Question: In pandas, when dropping duplicates you can specify which occurrence to keep. Is there an equivalent in Spark DataFrames?

Pandas:

df.sort_values('actual_datetime', ascending=False).drop_duplicates(subset=['scheduled_datetime', 'flt_flightnumber'], keep='first')

Spark DataFrames (I use Spark 1.6.0) don't have the keep option:

df.orderBy(['actual_datetime']).dropDuplicates(subset=['scheduled_datetime', 'flt_flightnumber'])

Imagine scheduled_datetime and flt_flightnumber are columns 6 and 17. By creating keys based on the values of these columns we can also deduplicate:

def get_key(x):
    # build a composite key from columns 6 and 17
    return "{0}{1}".format(x[6], x[17])

# this keeps one arbitrary row per key, not necessarily the first
df = df.map(lambda x: (get_key(x), x)).reduceByKey(lambda x, y: x)

but how do I specify keeping the first row and getting rid of the other duplicates? And what about keeping the last row?

Asked by ad_s on Jul 31 '16


7 Answers

To everyone saying that dropDuplicates keeps the first occurrence - this is not strictly correct.

dropDuplicates keeps the 'first occurrence' of a sort operation only if there is exactly one partition. See below for some examples.
However, this is not practical for most Spark datasets. So I'm also including an example of a 'first occurrence' drop-duplicates operation using a window function with sort, rank, and filter. See the bottom of the post for that example.

This is tested in Spark 2.4.0 using pyspark.

dropDuplicates examples

import pandas as pd

# generating some example data with pandas, will convert to spark df below
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-02-01'
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = pd.concat([df1,df2,df3])
print(dfall)
   col1     datestr
0     0  2018-01-01
1     1  2018-01-01
2     2  2018-01-01
3     3  2018-01-01
4     4  2018-01-01
0     0  2018-02-01
1     1  2018-02-01
2     2  2018-02-01
3     3  2018-02-01
4     4  2018-02-01
0     0  2018-03-01
1     1  2018-03-01
2     2  2018-03-01
3     3  2018-03-01
4     4  2018-03-01
# first example
# does not give first (based on datestr)
(spark.createDataFrame(dfall)
   .orderBy('datestr')
   .dropDuplicates(subset = ['col1'])
   .show()
)

# dropDuplicates NOT based on occurrence of sorted datestr
+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-03-01|
|   1|2018-02-01|
|   3|2018-02-01|
|   2|2018-02-01|
|   4|2018-01-01|
+----+----------+
# second example
# testing what happens with repartition
(spark.createDataFrame(dfall)
   .orderBy('datestr')
   .repartition('datestr')
   .dropDuplicates(subset = ['col1'])
   .show()
)

# dropDuplicates NOT based on occurrence of sorted datestr

+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-02-01|
|   1|2018-01-01|
|   3|2018-02-01|
|   2|2018-02-01|
|   4|2018-02-01|
+----+----------+
# third example
# testing with coalesce(1)
(spark
   .createDataFrame(dfall)
   .orderBy('datestr')
   .coalesce(1)
   .dropDuplicates(subset = ['col1'])
   .show()
)

# dropDuplicates based on occurrence of sorted datestr
+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-01-01|
|   1|2018-01-01|
|   2|2018-01-01|
|   3|2018-01-01|
|   4|2018-01-01|
+----+----------+
# fourth example
# testing with reverse sort then coalesce(1)
(spark
   .createDataFrame(dfall)
   .orderBy('datestr', ascending = False)
   .coalesce(1)
   .dropDuplicates(subset = ['col1'])
   .show()
)
# dropDuplicates based on occurrence of sorted datestr
+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-03-01|
|   1|2018-03-01|
|   2|2018-03-01|
|   3|2018-03-01|
|   4|2018-03-01|
+----+----------+

window, sort, rank, filter example

# generating some example data with pandas
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-02-01'
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = pd.concat([df1,df2,df3])
# into spark df
df_s = (spark.createDataFrame(dfall))
from pyspark.sql import Window
from pyspark.sql.functions import rank, col
window = Window.partitionBy("col1").orderBy("datestr")
(df_s.withColumn('rank', rank().over(window))
.filter(col('rank') == 1)
.drop('rank')
.show()
)
+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-01-01|
|   1|2018-01-01|
|   3|2018-01-01|
|   2|2018-01-01|
|   4|2018-01-01|
+----+----------+
# however this keeps tied rows if duplicates exist within the window partitions
# and so a tie breaker for the 'rank' function must be added

# generating some example data with pandas, will convert to spark df below
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-01-01' # note duplicates in this dataset
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = pd.concat([df1,df2,df3])
print(dfall)
   col1     datestr
0     0  2018-01-01
1     1  2018-01-01
2     2  2018-01-01
3     3  2018-01-01
4     4  2018-01-01
0     0  2018-01-01
1     1  2018-01-01
2     2  2018-01-01
3     3  2018-01-01
4     4  2018-01-01
0     0  2018-03-01
1     1  2018-03-01
2     2  2018-03-01
3     3  2018-03-01
4     4  2018-03-01
# rebuild the Spark DataFrame from the new data, which now contains duplicates
df_s = spark.createDataFrame(dfall)

# this does not fully dedupe, since ties exist within the window partitions
# and rank() assigns the same rank to tied rows (pyspark's rank() has no ranking-style option)
window = Window.partitionBy("col1").orderBy("datestr")
(df_s.withColumn('rank', rank().over(window))
 .filter(col('rank') == 1)
 .drop('rank')
 .show()
)
+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-01-01|
|   0|2018-01-01|
|   1|2018-01-01|
|   1|2018-01-01|
|   3|2018-01-01|
|   3|2018-01-01|
|   2|2018-01-01|
|   2|2018-01-01|
|   4|2018-01-01|
|   4|2018-01-01|
+----+----------+
# to deal with ties within window partitions, a tiebreaker column is added
from pyspark.sql import Window
from pyspark.sql.functions import rank, col, monotonically_increasing_id
window = Window.partitionBy("col1").orderBy("datestr",'tiebreak')
(df_s
 .withColumn('tiebreak', monotonically_increasing_id())
 .withColumn('rank', rank().over(window))
 .filter(col('rank') == 1).drop('rank','tiebreak')
 .show()
)
+----+----------+
|col1|   datestr|
+----+----------+
|   0|2018-01-01|
|   1|2018-01-01|
|   3|2018-01-01|
|   2|2018-01-01|
|   4|2018-01-01|
+----+----------+
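A shorter variant of the same idea is row_number(), which picks exactly one row per group even when ties exist (the choice among tied rows is arbitrary unless an extra sort column is added). A sketch, reusing the same df_s:

from pyspark.sql.functions import row_number

window = Window.partitionBy("col1").orderBy("datestr")
(df_s
 .withColumn('rn', row_number().over(window))
 .filter(col('rn') == 1)
 .drop('rn')
 .show()
)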
Answered by timle


I did the following:

dataframe.groupBy("uniqueColumn").min("time")

This groups by the given column and, within each group, keeps the minimum time, i.e. the earliest ('first') row per key. Note that the aggregated result only contains the grouping column and min(time); to keep the other columns you need to join back to the original DataFrame.
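A sketch of that join-back pattern, keeping this answer's hypothetical column names "uniqueColumn" and "time":

from pyspark.sql import functions as F

# earliest time per key
firsts = dataframe.groupBy("uniqueColumn").agg(F.min("time").alias("time"))

# recover the full rows; if several rows share the minimum time, all of them are kept
result = dataframe.join(firsts, on=["uniqueColumn", "time"], how="inner")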

Answered by Mahmoud Hanafy


Solution 1: add a new row-number column (an incremental column) and drop duplicates by keeping the minimum row number after grouping on all the columns you are interested in (you can include every column in the dedup key except the row-number column).

Solution 2: turn the DataFrame into an RDD (df.rdd), then group the RDD on one or more (or all) key columns and run a lambda over each group that drops rows the way you want and returns only the row you are interested in. A sketch of this is shown below.
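A minimal sketch of solution 2, assuming hypothetical column names key_col (the dedup key) and order_col (the column that defines which row is 'first'):

# key each row by the dedup column, then keep the row with the smallest order_col per key
first_per_key = (df.rdd
    .keyBy(lambda row: row['key_col'])
    .reduceByKey(lambda a, b: a if a['order_col'] <= b['order_col'] else b)
    .values())

# back to a DataFrame, reusing the original schema
first_df = spark.createDataFrame(first_per_key, df.schema)

Flipping the comparison (>= instead of <=) keeps the last row instead.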

My old solution was to use the dropDuplicates method, which by default keeps the first occurrence, but one of my friends (sameer) mentioned that it didn't work for him.

Answered by Veera Marni


Use a window and the row_number function. Order ascending or descending to select the first or the last row.

from pyspark.sql import Window
from pyspark.sql import functions as f

window = Window.partitionBy("col1").orderBy(f.col("datestr").asc())
df = (df.withColumn('row', f.row_number().over(window))
      .filter(f.col('row') == 1)
      .drop('row'))
df.show()
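To keep the last row per group instead, flip the sort order (same sketch, same hypothetical column names):

window = Window.partitionBy("col1").orderBy(f.col("datestr").desc())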
Answered by Alex


I just did something perhaps similar to what you need, using PySpark's drop_duplicates.

The situation is this: I have 2 DataFrames (coming from 2 files) which are exactly the same except for 2 columns, file_date (the file date extracted from the file name) and data_date (the row's date stamp). Annoyingly, I have rows with the same data_date (and all other column values too) but different file_date, because they get replicated in every new incoming file, with one new row added each time.

I needed to capture all rows from the new file, plus the one row left over from the previous file that is not in the new file. The remaining columns to the right of data_date are the same between the two files for the same data_date.

file_1_20190122 - df1

+------------+----------+----------+
|station_code| file_date| data_date|
+------------+----------+----------+
|        AGGH|2019-01-22|2019-01-16| <- One row we want to keep where file_date 22nd
|        AGGH|2019-01-22|2019-01-17|
|        AGGH|2019-01-22|2019-01-18|
|        AGGH|2019-01-22|2019-01-19|
|        AGGH|2019-01-22|2019-01-20|
|        AGGH|2019-01-22|2019-01-21|
|        AGGH|2019-01-22|2019-01-22|


file_2_20190123 - df2

+------------+----------+----------+
|station_code| file_date| data_date|
+------------+----------+----------+
|        AGGH|2019-01-23|2019-01-17| \/ ALL rows we want to keep where file_date 23rd
|        AGGH|2019-01-23|2019-01-18|
|        AGGH|2019-01-23|2019-01-19|
|        AGGH|2019-01-23|2019-01-20|
|        AGGH|2019-01-23|2019-01-21|
|        AGGH|2019-01-23|2019-01-22|
|        AGGH|2019-01-23|2019-01-23|

This requires us to union and sort the DataFrames, then deduplicate them on all columns but one. Let me walk you through it.

union_df = df1.union(df2) \
                .sort(['station_code', 'data_date'], ascending=[True, True])

+------------+----------+----------+
|station_code| file_date| data_date|
+------------+----------+----------+
|        AGGH|2019-01-22|2019-01-16| <- keep
|        AGGH|2019-01-23|2019-01-17| <- keep
|        AGGH|2019-01-22|2019-01-17| x- drop
|        AGGH|2019-01-22|2019-01-18| x- drop
|        AGGH|2019-01-23|2019-01-18| <- keep
|        AGGH|2019-01-23|2019-01-19| <- keep
|        AGGH|2019-01-22|2019-01-19| x- drop
|        AGGH|2019-01-23|2019-01-20| <- keep
|        AGGH|2019-01-22|2019-01-20| x- drop
|        AGGH|2019-01-22|2019-01-21| x- drop
|        AGGH|2019-01-23|2019-01-21| <- keep
|        AGGH|2019-01-23|2019-01-22| <- keep
|        AGGH|2019-01-22|2019-01-22| x- drop
|        AGGH|2019-01-23|2019-01-23| <- keep

Here we drop the already-sorted duplicated rows, deduplicating on every column except file_date:

nonduped_union_df = union_df \
            .drop_duplicates(['station_code', 'data_date', 'time_zone', 
                              'latitude', 'longitude', 'elevation', 
                              'highest_temperature', 'lowest_temperature', 
                              'highest_temperature_10_year_normal', 
                              'another_50_columns'])

And the result holds ONE row with the earliest date from df1 (which is not in df2) and ALL rows from df2:

nonduped_union_df.select(['station_code', 'file_date', 'data_date', 
                          'highest_temperature', 'lowest_temperature']) \
                         .sort(['station_code', 'data_date'], ascending=[True, True]) \
                         .show(30)


+------------+----------+----------+-------------------+------------------+
|station_code| file_date| data_date|highest_temperature|lowest_temperature|
+------------+----------+----------+-------------------+------------------+
|        AGGH|2019-01-22|2019-01-16|                 90|                77| <- df1 22nd
|        AGGH|2019-01-23|2019-01-17|                 90|                77| \/- df2 23rd
|        AGGH|2019-01-23|2019-01-18|                 91|                75|
|        AGGH|2019-01-23|2019-01-19|                 88|                77|
|        AGGH|2019-01-23|2019-01-20|                 88|                77|
|        AGGH|2019-01-23|2019-01-21|                 88|                77|
|        AGGH|2019-01-23|2019-01-22|                 90|                75|
|        AGGH|2019-01-23|2019-01-23|                 90|                75|
|        CWCA|2019-01-22|2019-01-15|                 23|                -2|
|        CWCA|2019-01-23|2019-01-16|                  7|                -8|
|        CWCA|2019-01-23|2019-01-17|                 28|                -6|
|        CWCA|2019-01-23|2019-01-18|                  0|               -13|
|        CWCA|2019-01-23|2019-01-19|                 25|               -15|
|        CWCA|2019-01-23|2019-01-20|                 -4|               -18|
|        CWCA|2019-01-23|2019-01-21|                 27|                -6|
|        CWCA|2019-01-22|2019-01-22|                 30|                17|
|        CWCA|2019-01-23|2019-01-22|                 30|                13|
|        CWCO|2019-01-22|2019-01-15|                 34|                29|
|        CWCO|2019-01-23|2019-01-16|                 33|                13|
|        CWCO|2019-01-22|2019-01-16|                 33|                13|
|        CWCO|2019-01-22|2019-01-17|                 23|                 7|
|        CWCO|2019-01-23|2019-01-17|                 23|                 7|
+------------+----------+----------+-------------------+------------------+
only showing top 30 rows

It may not be the most suitable answer for this case, but it's the one that worked for me.

Let me know if you get stuck somewhere.

BTW, if anyone can tell me how to select all columns in a df except one, without listing them in a list, I will be very thankful.
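One way to do that, as a sketch using only the standard df.columns attribute:

cols_except_file_date = [c for c in union_df.columns if c != 'file_date']
nonduped_union_df = union_df.drop_duplicates(cols_except_file_date)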

Regards G

Answered by Gunay Anach


You can use a window with row_number:

import pandas as pd
df1 = pd.DataFrame({'col1':range(0,5)})
df1['datestr'] = '2018-01-01'
df2 = pd.DataFrame({'col1':range(0,5)})
df2['datestr'] = '2018-02-01'
df3 = pd.DataFrame({'col1':range(0,5)})
df3['datestr'] = '2018-03-01'
dfall = spark.createDataFrame(pd.concat([df1,df2,df3]))

from pyspark.sql.window import Window
from pyspark.sql.functions import rank, col, row_number
window = Window.partitionBy('col1').orderBy(col('datestr'))
dfall.select('*', row_number().over(window).alias('posicion')).show()
dfall.select('*', row_number().over(window).alias('posicion')).where('posicion ==1').show()

+----+----------+--------+
|col1|   datestr|posicion|
+----+----------+--------+
|   0|2018-01-01|       1|
|   0|2018-02-01|       2|
|   0|2018-03-01|       3|
|   1|2018-01-01|       1|
|   1|2018-02-01|       2|
|   1|2018-03-01|       3|
|   3|2018-01-01|       1|
|   3|2018-02-01|       2|
|   3|2018-03-01|       3|
|   2|2018-01-01|       1|
|   2|2018-02-01|       2|
|   2|2018-03-01|       3|
|   4|2018-01-01|       1|
|   4|2018-02-01|       2|
|   4|2018-03-01|       3|
+----+----------+--------+
+----+----------+--------+
|col1|   datestr|posicion|
+----+----------+--------+
|   0|2018-01-01|       1|
|   1|2018-01-01|       1|
|   3|2018-01-01|       1|
|   2|2018-01-01|       1|
|   4|2018-01-01|       1|
+----+----------+--------+
Answered by Carmen Pérez Carrillo


I would try it this way:

Assuming your data_df looks like this, and we want to keep the rows with the highest value in col1 per datestr:

  col1     datestr
     0  2018-01-01
     1  2018-01-01
     2  2018-01-01
     3  2018-01-01
     4  2018-01-01
     0  2018-02-01
     1  2018-02-01
     2  2018-02-01
     3  2018-02-01
     4  2018-02-01
     0  2018-03-01
     1  2018-03-01
     2  2018-03-01
     3  2018-03-01
     4  2018-03-01

you can do:

from pyspark.sql import Window 
import pyspark.sql.functions as F

w = Window.partitionBy('datestr')
data_df = data_df.withColumn("max", F.max(F.col("col1"))\
    .over(w))\
    .where(F.col('max') == F.col('col1'))\
    .drop("max")

this results in:

  col1     datestr
     4  2018-01-01
     4  2018-02-01
     4  2018-03-01
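Applied to the question's columns (a sketch using the column names from the question), the same pattern keeps the row with the latest actual_datetime per (scheduled_datetime, flt_flightnumber):

w = Window.partitionBy('scheduled_datetime', 'flt_flightnumber')
df = (df.withColumn('max_dt', F.max(F.col('actual_datetime')).over(w))
        .where(F.col('actual_datetime') == F.col('max_dt'))
        .drop('max_dt'))
# note: if several rows share the maximum actual_datetime, they are all kept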
Answered by Into Numbers