 

Better way to identify duplicates in a group in a Pandas dataframe? [duplicate]

I have a dataframe

   x  c
0  0  1
1  3  2
2  1  1
3  2  1
4  3  1
5  4  1
6  1  0
7  3  1
8  2  1
9  1  2

I would like to produce

   c  x duplicated
0  1  0      False
1  2  3      False
2  1  1      False
3  1  2       True
4  1  3       True
5  1  4      False
6  0  1      False
7  1  3       True
8  1  2       True
9  2  1      False

that is to group by c first, and mark all duplicated rows in the group.

My current approach is

import numpy as np
import pandas as pd

c = np.random.randint(0, 3, 10)
x = np.random.randint(0, 5, 10)
d = pd.DataFrame({'x': x, 'c': c})
d['duplicated'] = d.groupby('c').apply(
    lambda x: x.duplicated(keep=False)
).reset_index(level=0, drop=True)

Is there any better way?

Asked Oct 26 '25 by colinfang

1 Answer

Use duplicated directly; by default it checks all columns. Since duplicated rows within a c group necessarily share the same c value, this is equivalent to grouping by c and checking x:

d['duplicated'] = d.duplicated(keep=False)
print (d)
   x  c  duplicated
0  0  1       False
1  3  2       False
2  1  1       False
3  2  1        True
4  3  1        True
5  4  1       False
6  1  0       False
7  3  1        True
8  2  1        True
9  1  2       False

Or specify the columns explicitly with subset:

d['duplicated'] = d.duplicated(subset=['c', 'x'], keep=False)
print (d)
   x  c  duplicated
0  0  1       False
1  3  2       False
2  1  1       False
3  2  1        True
4  3  1        True
5  4  1       False
6  1  0       False
7  3  1        True
8  2  1        True
9  1  2       False
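For completeness, a groupby-based alternative (a sketch, not part of the accepted answer) counts how many rows share each (c, x) pair with transform('size'); any count above 1 marks the row as duplicated within its group:

```python
import pandas as pd

# The same dataframe as in the question.
d = pd.DataFrame({
    'x': [0, 3, 1, 2, 3, 4, 1, 3, 2, 1],
    'c': [1, 2, 1, 1, 1, 1, 0, 1, 1, 2],
})

# Count how many rows share each (c, x) pair; a count greater than 1
# means the row is duplicated within its c group.
d['duplicated'] = d.groupby(['c', 'x'])['x'].transform('size') > 1
print(d)
```

This produces the same boolean column as duplicated(keep=False) here, and the size counts themselves can be useful if you also want to know how many copies each row has.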
Answered Oct 29 '25 by jezrael

