I have a dataframe like this:
import pandas as pd
records = [{'Name':'John', 'Country':'Canada'}, {'Name':'John', 'Country':'Canada'},
{'Name':'Mary', 'Country':'US'}, {'Name':'Mary', 'Country':'Canada'},
{'Name':'Mary', 'Country':'US'}, {'Name':'Stan', 'Country':'UK'},
{'Name':'Stan', 'Country':'UK'}]
df = pd.DataFrame(records)
df
Name Country
0 John Canada
1 John Canada
2 Mary US
3 Mary Canada
4 Mary US
5 Stan UK
6 Stan UK
I want to test for Names that have differing values for Country. In this case, I only want to see Mary, since she has both US and Canada in the country column. I can exclude John and Stan since both of their records are for the same country.
Any ideas how to accomplish this?
The first step is to find the names that have more than one unique Country, and then you can use loc on your dataframe to filter in only those values.
Method 1: groupby
# groupby name and return a boolean of whether each has more than 1 unique Country
multi_country = df.groupby(["Name"]).Country.nunique().gt(1)
# use loc to only see those values that have `True` in `multi_country`:
df.loc[df.Name.isin(multi_country[multi_country].index)]
Name Country
2 Mary US
3 Mary Canada
4 Mary US
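As an aside, the same "groups with more than one unique Country" logic can be written in a single step with groupby.filter, which keeps every row of each group that passes the test (a sketch, equivalent to Method 1):

```python
import pandas as pd

records = [{'Name': 'John', 'Country': 'Canada'}, {'Name': 'John', 'Country': 'Canada'},
           {'Name': 'Mary', 'Country': 'US'}, {'Name': 'Mary', 'Country': 'Canada'},
           {'Name': 'Mary', 'Country': 'US'}, {'Name': 'Stan', 'Country': 'UK'},
           {'Name': 'Stan', 'Country': 'UK'}]
df = pd.DataFrame(records)

# filter keeps whole groups for which the lambda returns True,
# i.e. names whose Country column has more than one unique value
result = df.groupby('Name').filter(lambda g: g['Country'].nunique() > 1)
print(result)
```

This avoids the intermediate `multi_country` series, at the cost of calling a Python lambda per group.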
Method 2: drop_duplicates and value_counts
You can follow the same logic, but use drop_duplicates and value_counts instead of groupby:
multi_country = df.drop_duplicates().Name.value_counts().gt(1)
df.loc[df.Name.isin(multi_country[multi_country].index)]
Name Country
2 Mary US
3 Mary Canada
4 Mary US
Method 3: drop_duplicates and duplicated
Note: this gives slightly different results: you'll only see Mary's unique rows, which may or may not be what you want.
You can drop the duplicates in the original frame, and return only the names that have multiple entries in the deduped frame:
no_dups = df.drop_duplicates()
no_dups[no_dups.duplicated(keep=False, subset="Name")]
Name Country
2 Mary US
3 Mary Canada
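If you do want all of Mary's original rows back (including the duplicate `US` entry), one possible follow-up is to feed the deduped names back into the original frame (a sketch building on Method 3):

```python
import pandas as pd

records = [{'Name': 'John', 'Country': 'Canada'}, {'Name': 'John', 'Country': 'Canada'},
           {'Name': 'Mary', 'Country': 'US'}, {'Name': 'Mary', 'Country': 'Canada'},
           {'Name': 'Mary', 'Country': 'US'}, {'Name': 'Stan', 'Country': 'UK'},
           {'Name': 'Stan', 'Country': 'UK'}]
df = pd.DataFrame(records)

no_dups = df.drop_duplicates()
# names that still appear more than once after dedup have differing countries
multi = no_dups[no_dups.duplicated(keep=False, subset='Name')]
# map those names back to the original frame to recover every original row
full = df[df['Name'].isin(multi['Name'])]
print(full)
```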
First you can groupby the Name column, then aggregate each group's Country column into a list, and check whether all values in that list are the same.
Then you can use boolean indexing to select the rows which have differing values.
# True for names whose Country list contains only one distinct value
s = df.groupby('Name')['Country'].agg(list).apply(lambda l: all(map(lambda x: x == l[0], l)))
# ~s flags names with differing countries; drop_duplicates removes repeated rows
df_ = df[df['Name'].isin(s[~s].index)].drop_duplicates()
print(df_)
Name Country
2 Mary US
3 Mary Canada
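The all-equal check over the list can also be expressed with nunique. A sketch using transform, which broadcasts the per-group count back onto every row so the boolean mask aligns directly with the original index:

```python
import pandas as pd

records = [{'Name': 'John', 'Country': 'Canada'}, {'Name': 'John', 'Country': 'Canada'},
           {'Name': 'Mary', 'Country': 'US'}, {'Name': 'Mary', 'Country': 'Canada'},
           {'Name': 'Mary', 'Country': 'US'}, {'Name': 'Stan', 'Country': 'UK'},
           {'Name': 'Stan', 'Country': 'UK'}]
df = pd.DataFrame(records)

# transform('nunique') returns one value per row (the group's unique-Country
# count), so the comparison yields a mask indexed like df itself
mask = df.groupby('Name')['Country'].transform('nunique') > 1
df_ = df[mask].drop_duplicates()
print(df_)
```

This skips both the `agg(list)` step and the `isin` lookup.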