Select rows from a Pandas DataFrame with the same value in one column but different values in another column

Say I have the pandas DataFrame below:

     A      B  C   D
1  foo    one  0   0
2  foo    one  2   4
3  foo    two  4   8
4  cat    one  8   4
5  bar   four  6  12
6  bar  three  7  14
7  bar   four  7  14

I would like to select all the rows that have equal values in A but differing values in B. So I would like the output of my code to be:

     A      B  C   D
1  foo    one  0   0
3  foo    two  4   8
6  bar  three  7  14
7  bar   four  7  14

What's the most efficient way to do this? I have approximately 11,000 rows with a lot of variation in the column values, and this situation comes up a lot. In my dataset, if two elements in column A are equal, their column B values should also be equal; due to mislabeling, however, this is not always the case. I would like to fix this, and it would be impractical to do it one row at a time.
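
For reference, a minimal, self-contained sketch that rebuilds the frame above and flags the A values whose B labels disagree (the groupby/nunique check is just one possible way to spot the mislabeled groups):

import pandas as pd

# Rebuild the example frame shown above.
df = pd.DataFrame({
    'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar', 'bar'],
    'B': ['one', 'one', 'two', 'one', 'four', 'three', 'four'],
    'C': [0, 2, 4, 8, 6, 7, 7],
    'D': [0, 4, 8, 4, 12, 14, 14],
}, index=range(1, 8))

# Count the distinct B labels per A value; anything above 1 is a candidate for mislabeling.
label_counts = df.groupby('A')['B'].nunique()
print(label_counts[label_counts > 1])  # foo and bar each have 2 distinct B labels

# Keep every row that belongs to one of those inconsistent groups.
inconsistent = df[df.groupby('A')['B'].transform('nunique') > 1]
print(inconsistent)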

asked Jan 04 '19 by tinman248


3 Answers

You can try groupby() + filter + drop_duplicates():

>>> df.groupby('A').filter(lambda g: len(g) > 1).drop_duplicates(subset=['A', 'B'], keep="first")
     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
4  bar   four  6  12
5  bar  three  7  14

Or, if you only want to drop duplicates over the subset of columns A and B, you can use the call below, but note that the result will still include the cat row:

>>> df.drop_duplicates(subset=['A', 'B'], keep="first")
     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
3  cat    one  8   4
4  bar   four  6  12
5  bar  three  7  14
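
For clarity, the first chained call can be split into two steps; this is just a sketch, assuming df is the example frame from the question:

# Step 1: keep only the A values that occur more than once (this drops the single cat row).
multi = df.groupby('A').filter(lambda g: len(g) > 1)

# Step 2: keep one representative row per (A, B) pair.
result = multi.drop_duplicates(subset=['A', 'B'], keep='first')
print(result)
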
answered Oct 13 '22 by Karn Kumar


Use groupby + filter + head:

result = df.groupby('A').filter(lambda g: len(g) > 1).groupby(['A', 'B']).head(1)
print(result)

Output

     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
4  bar   four  6  12
5  bar  three  7  14

The first groupby and filter remove the rows whose A value is not duplicated (i.e. cat); the second groups by A and B and keeps the first row of each group.
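
Written out as two separate statements (a sketch, again assuming df is the example frame from the question):

# Step 1: drop the rows whose A value occurs only once (removes the cat row).
multi = df.groupby('A').filter(lambda g: len(g) > 1)

# Step 2: within each (A, B) group, keep only the first row.
result = multi.groupby(['A', 'B']).head(1)
print(result)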

answered Oct 13 '22 by Dani Mesejo


The current answers are correct and perhaps more sophisticated, too. If you have complex criteria, the filter function will be very useful. If you are like me and want to keep things simple, I feel the following is a more beginner-friendly way:

>>> df = pd.DataFrame({
    'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar', 'bar'],
    'B': ['one', 'one', 'two', 'one', 'four', 'three', 'four'],
    'C': [0,2,4,8,6,7,7],
    'D': [0,4,8,4,12,14,14]
}, index=[1,2,3,4,5,6,7])

>>> df = df.drop_duplicates(['A', 'B'], keep='last')
>>> df
     A      B  C   D
2  foo    one  2   4
3  foo    two  4   8
4  cat    one  8   4
6  bar  three  7  14
7  bar   four  7  14


>>> df = df[df.duplicated(['A'], keep=False)]
>>> df
     A      B  C   D
2  foo    one  2   4
3  foo    two  4   8
6  bar  three  7  14
7  bar   four  7  14

keep='last' is optional here
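
As a sketch of that choice, keep='first' would keep the earlier row of each duplicated (A, B) pair instead of the later one; which representative you keep is up to you (this assumes df is the original frame as built above, before the two reassignments):

# Same two steps, keeping the first occurrence of each (A, B) pair instead of the last.
dedup = df.drop_duplicates(['A', 'B'], keep='first')
result = dedup[dedup.duplicated(['A'], keep=False)]
print(result)
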

answered Oct 13 '22 by Ali Faizan