Pandas find Duplicates in cross values

I have a DataFrame and want to eliminate duplicate rows that have the same values, but in different columns:

df = pd.DataFrame(columns=['a','b','c','d'], index=['1','2','3'])
df.loc['1'] = pd.Series({'a':'x','b':'y','c':'e','d':'f'})
df.loc['2'] = pd.Series({'a':'e','b':'f','c':'x','d':'y'})
df.loc['3'] = pd.Series({'a':'w','b':'v','c':'s','d':'t'})

df
Out[8]: 
   a  b  c  d
1  x  y  e  f
2  e  f  x  y
3  w  v  s  t

Rows 1 and 2 both contain the values {x, y, e, f}, but arranged in a cross: if you exchanged columns c, d with a, b in row 2, you would have a duplicate. I want to drop such rows and keep only one, to get the final output:

df_new
Out[20]: 
   a  b  c  d
1  x  y  e  f
3  w  v  s  t

How can I efficiently achieve that?

asked Jan 09 '18 by maxtenzin

3 Answers

You can filter with boolean indexing, using a mask built from numpy.sort and duplicated; invert the mask with ~:

import numpy as np

df = df[~pd.DataFrame(np.sort(df, axis=1), index=df.index).duplicated()]
print(df)
   a  b  c  d
1  x  y  e  f
3  w  v  s  t

Detail:

print(np.sort(df, axis=1))
[['e' 'f' 'x' 'y']
 ['e' 'f' 'x' 'y']
 ['s' 't' 'v' 'w']]

print(pd.DataFrame(np.sort(df, axis=1), index=df.index))
   0  1  2  3
1  e  f  x  y
2  e  f  x  y
3  s  t  v  w

print(pd.DataFrame(np.sort(df, axis=1), index=df.index).duplicated())
1    False
2     True
3    False
dtype: bool

print(~pd.DataFrame(np.sort(df, axis=1), index=df.index).duplicated())
1     True
2    False
3     True
dtype: bool
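The same order-insensitive idea can also be written in pure pandas with apply, keying each row by a sorted tuple of its values — a slower sketch than the vectorized np.sort version, shown here on the question's sample data:

```python
import pandas as pd

df = pd.DataFrame({'a': ['x', 'e', 'w'],
                   'b': ['y', 'f', 'v'],
                   'c': ['e', 'x', 's'],
                   'd': ['f', 'y', 't']}, index=['1', '2', '3'])

# Key each row by its sorted values so column order is ignored,
# then keep only the first row seen for each key.
key = df.apply(lambda row: tuple(sorted(row)), axis=1)
df_new = df[~key.duplicated()]
print(df_new)
```

Because the key ignores column order, row 2 gets the same key as row 1 and is dropped.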
answered Nov 11 '22 by jezrael


Here's another solution, with a for loop. Note that it drops a row as soon as it shares any single value with an already-kept row, which is broader than dropping only exact cross-duplicates:

data = df.to_numpy()  # df.as_matrix() was removed in pandas 1.0
new = []

for row in data:
    if not new:
        new.append(row)
    # keep the row only if none of its values appears in any kept row
    elif not any(c in nrow for nrow in new for c in row):
        new.append(row)

new_df = pd.DataFrame(new, columns=df.columns)
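A quick runnable check of this loop on the sample data, plus an extra hypothetical row that shares only the single value 'x' with row 1 — the loop drops that row too, which shows it is stricter than pure cross-duplicate removal:

```python
import pandas as pd

df = pd.DataFrame({'a': ['x', 'e', 'w', 'x'],
                   'b': ['y', 'f', 'v', 'q'],
                   'c': ['e', 'x', 's', 'r'],
                   'd': ['f', 'y', 't', 'u']})

data = df.to_numpy()
new = []
for row in data:
    if not new:
        new.append(row)
    # row 4 shares 'x' with row 1, so it is rejected here as well
    elif not any(c in nrow for nrow in new for c in row):
        new.append(row)

new_df = pd.DataFrame(new, columns=df.columns)
print(new_df)
```

Only the first and third rows survive; the rebuilt frame gets a fresh integer index.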
answered Nov 11 '22 by zipa


Sort the values within each row (np.sort), mark the duplicates (.duplicated()), then use that mask to drop the corresponding index labels (df.drop):

import pandas as pd
import numpy as np

df = pd.DataFrame(columns=['a','b','c','d'], index=['1','2','3'])
df.loc['1'] = pd.Series({'a':'x','b':'y','c':'e','d':'f'})
df.loc['2'] = pd.Series({'a':'e','b':'f','c':'x','d':'y'})
df.loc['3'] = pd.Series({'a':'w','b':'v','c':'s','d':'t'})

# mark rows whose sorted values have already been seen
df_duplicated = pd.DataFrame(np.sort(df, axis=1), index=df.index).duplicated()
df = df.drop(df.index[df_duplicated])
answered Nov 11 '22 by Shrey