
Drop all duplicate rows across multiple columns in Python Pandas

The pandas drop_duplicates function is great for "uniquifying" a dataframe. However, its take_last=True / take_last=False keyword only lets you keep the first or last occurrence, whereas I would like to drop all rows which are duplicated across a subset of columns. Is this possible?

    A   B   C
0   foo 0   A
1   foo 1   A
2   foo 1   B
3   bar 1   A

As an example, I would like to drop rows which match on columns A and C, so this should drop rows 0 and 1.

asked May 15 '14 by Jamie Bull



6 Answers

This is much easier in pandas now with drop_duplicates and the keep parameter.

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"], "B": [0, 1, 1, 1], "C": ["A", "A", "B", "A"]})
# keep=False drops every row that is duplicated on columns A and C
df.drop_duplicates(subset=['A', 'C'], keep=False)
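Running this on the question's frame returns the desired result:

     A  B  C
2  foo  1  B
3  bar  1  A

(The keep parameter superseded the take_last argument mentioned in the question: keep='last' corresponds to take_last=True, while keep=False, which drops every duplicate, had no take_last equivalent.)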
answered by Ben


Just want to add to Ben's answer on drop_duplicates:

keep : {‘first’, ‘last’, False}, default ‘first’

  • first : Drop duplicates except for the first occurrence.

  • last : Drop duplicates except for the last occurrence.

  • False : Drop all duplicates.

So setting keep to False will give you the desired answer.

DataFrame.drop_duplicates(*args, **kwargs)
Return DataFrame with duplicate rows removed, optionally only considering certain columns.

Parameters:

  • subset : column label or sequence of labels, optional. Only consider certain columns for identifying duplicates; by default use all of the columns.

  • keep : {'first', 'last', False}, default 'first'. first: drop duplicates except for the first occurrence. last: drop duplicates except for the last occurrence. False: drop all duplicates.

  • take_last : deprecated.

  • inplace : boolean, default False. Whether to drop duplicates in place or to return a copy.

  • cols : kwargs-only argument, alias of subset [deprecated].

Returns: deduplicated : DataFrame
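To make the three options concrete, here is a quick sketch on the question's frame (the row numbers noted in the comments assume the example data from the question):

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"], "B": [0, 1, 1, 1], "C": ["A", "A", "B", "A"]})

df.drop_duplicates(subset=['A', 'C'], keep='first')  # keeps rows 0, 2, 3
df.drop_duplicates(subset=['A', 'C'], keep='last')   # keeps rows 1, 2, 3
df.drop_duplicates(subset=['A', 'C'], keep=False)    # keeps rows 2, 3 (the desired result)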

answered by Jake


If you want the result to be stored in another dataframe:

df.drop_duplicates(keep=False)

or

df.drop_duplicates(keep=False, inplace=False)

If the same dataframe needs to be updated in place:

df.drop_duplicates(keep=False, inplace=True)

The examples above remove every row that has a duplicate. Note that keep=False does not keep one copy per group, so it is stricter than DISTINCT * in SQL; use keep='first' if you want DISTINCT-style behaviour.
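A minimal sketch of both call styles (subset=['A', 'C'] is added here to match the question; it is not part of the snippets above):

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"], "B": [0, 1, 1, 1], "C": ["A", "A", "B", "A"]})

# Returns a new frame with rows 0 and 1 removed; df itself is unchanged
deduped = df.drop_duplicates(subset=['A', 'C'], keep=False)

# Drops the rows in df itself and returns None
df.drop_duplicates(subset=['A', 'C'], keep=False, inplace=True)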

answered by Ramanujam Allam


Use groupby and filter:

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"], "B": [0, 1, 1, 1], "C": ["A", "A", "B", "A"]})
# Keep only the (A, C) groups that contain exactly one row
df.groupby(["A", "C"]).filter(lambda g: g.shape[0] == 1)
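On the question's frame this keeps only rows 2 and 3: the groups ('foo', 'B') and ('bar', 'A') each contain a single row, while ('foo', 'A') contains two rows and is filtered out. It also works on pandas versions that predate the keep parameter.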
answered by HYRY


Try these variations:

df = pd.DataFrame({"A":["foo", "foo", "foo", "bar","foo"], "B":[0,1,1,1,1], "C":["A","A","B","A","A"]})

>>> df.drop_duplicates("A", keep='first')

or

>>> df.drop_duplicates(keep='first')

or

>>> df.drop_duplicates(keep='last')
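For reference, a quick sketch of what each call keeps on the five-row frame above (the comments are annotations added here; keep=False, from the question, is included for comparison):

>>> df.drop_duplicates("A", keep='first')              # first row per value of A: rows 0 and 3
>>> df.drop_duplicates(keep='first')                   # rows 1 and 4 are identical; keeps rows 0, 1, 2, 3
>>> df.drop_duplicates(keep='last')                    # keeps rows 0, 2, 3, 4
>>> df.drop_duplicates(subset=['A', 'C'], keep=False)  # drops all (A, C) duplicates; keeps rows 2 and 3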
answered by Priyansh gupta


Actually, dropping rows 0 and 1 only requires the following (any observation containing a matched A and C is kept):

In [335]: df['AC'] = df.A + df.C

In [336]: print df.drop_duplicates('C', take_last=True)  # this dataset is a special case; in general, one may need to first drop_duplicates by 'C' and then by 'A'
     A  B  C    AC
2  foo  1  B  fooB
3  bar  1  A  barA

[2 rows x 4 columns]

But I suspect what you really want is this (one observation containing a matched A and C is kept):

In [337]: print df.drop_duplicates('AC')
     A  B  C    AC
0  foo  0  A  fooA
2  foo  1  B  fooB
3  bar  1  A  barA

[3 rows x 4 columns]

Edit:

Now it is much clearer, therefore:

In [352]: DG = df.groupby(['A', 'C'])
          print pd.concat([DG.get_group(item) for item, value in DG.groups.items() if len(value) == 1])
     A  B  C
2  foo  1  B
3  bar  1  A

[2 rows x 3 columns]
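For completeness, on pandas versions that have the keep parameter this groupby/concat construction collapses to the one-liner from Ben's answer above (a sketch, not part of the original answer):

df.drop_duplicates(subset=['A', 'C'], keep=False)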
answered by CT Zhu