I have a dataframe which looks roughly like this:
>>> data
price currency
id
2 1050 EU
5 1400 EU
4 1750 EU
8 4000 EU
7 630 GBP
1 1000 GBP
9 1400 GBP
3 2000 USD
6 7000 USD
I need to get a new dataframe with the n top-priced products for each currency, where n depends on the currency and is given in another dataframe:
>>> select_number
number_to_select
currency
GBP 2
EU 2
USD 1
If I had to select the same number of top-priced elements for every currency, I could group the data by currency with pandas.groupby and then use the head method of the grouped object. However, head accepts only a single number, not an array or some expression.
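For a fixed n (say 2) it would be something along these lines:

# Sketch for a fixed n: sort by price and take the first two rows per currency.
top2 = (data.sort_values('price', ascending=False)
            .groupby('currency')
            .head(2))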
Of course, I could write a for loop, but that would be a very awkward and inefficient way to do it.
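The loop would look roughly like this (just a sketch of the approach I would like to avoid):

import pandas as pd

# For each currency, take its rows, sort by price, keep the top n, then concatenate.
parts = []
for cur, n in select_number['number_to_select'].items():
    subset = data[data['currency'] == cur].sort_values('price', ascending=False)
    parts.append(subset.head(n))
result = pd.concat(parts)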
How can I do this in a good way?
You can use:
import pandas as pd

data = pd.DataFrame({'id': {0: 2, 1: 5, 2: 4, 3: 8, 4: 7, 5: 1, 6: 9, 7: 3, 8: 6},
                     'price': {0: 1050, 1: 1400, 2: 1750, 3: 4000, 4: 630, 5: 1000, 6: 1400, 7: 2000, 8: 7000},
                     'currency': {0: 'EU', 1: 'EU', 2: 'EU', 3: 'EU', 4: 'GBP', 5: 'GBP', 6: 'GBP', 7: 'USD', 8: 'USD'}})
select_number = pd.DataFrame({'number_to_select': {'USD': 1, 'GBP': 2, 'EU': 2}})
print (data)
currency id price
0 EU 2 1050
1 EU 5 1400
2 EU 4 1750
3 EU 8 4000
4 GBP 7 630
5 GBP 1 1000
6 GBP 9 1400
7 USD 3 2000
8 USD 6 7000
print (select_number)
number_to_select
EU 2
GBP 2
USD 1
Solution with mapping by dict:
d = select_number.to_dict()
d1 = d['number_to_select']
print (d1)
{'USD': 1, 'EU': 2, 'GBP': 2}
print (data.groupby('currency').apply(lambda dfg: dfg.nlargest(d1[dfg.name],'price'))
.reset_index(drop=True))
currency id price
0 EU 8 4000
1 EU 4 1750
2 GBP 9 1400
3 GBP 1 1000
4 USD 6 7000
Solution 2:
print (data.groupby('currency')
.apply(lambda dfg: (dfg.nlargest(select_number
.loc[dfg.name, 'number_to_select'], 'price')))
.reset_index(drop=True))
id price currency
0 8 4000 EU
1 4 1750 EU
2 9 1400 GBP
3 1 1000 GBP
4 6 7000 USD
Explanation:
For debugging, I think it is best to use a function f with print:
def f(dfg):
    # dfg is the DataFrame of one group
    print (dfg)
    # name of the group
    print (dfg.name)
    # select value from select_number
    print (select_number.loc[dfg.name, 'number_to_select'])
    # return top rows per group
    print (dfg.nlargest(select_number.loc[dfg.name, 'number_to_select'], 'price'))
    return dfg.nlargest(select_number.loc[dfg.name, 'number_to_select'], 'price')
print (data.groupby('currency').apply(f))
currency id price
0 EU 2 1050
1 EU 5 1400
2 EU 4 1750
3 EU 8 4000
currency id price
0 EU 2 1050
1 EU 5 1400
2 EU 4 1750
3 EU 8 4000
EU
2
currency id price
3 EU 8 4000
2 EU 4 1750
currency id price
4 GBP 7 630
5 GBP 1 1000
6 GBP 9 1400
GBP
2
currency id price
6 GBP 9 1400
5 GBP 1 1000
currency id price
7 USD 3 2000
8 USD 6 7000
USD
1
currency id price
8 USD 6 7000
currency id price
currency
EU 3 EU 8 4000
2 EU 4 1750
GBP 6 GBP 9 1400
5 GBP 1 1000
USD 8 USD 6 7000
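A possible alternative sketch without apply, assuming the dict d1 from above: sort once by price and keep the first d1[currency] rows of each group with cumcount and map (rows come back ordered by price rather than grouped by currency):

# Alternative sketch: sort descending by price, then keep rows whose
# position within their currency group is below the mapped limit.
top = data.sort_values('price', ascending=False)
print (top[top.groupby('currency').cumcount() < top['currency'].map(d1)]
          .reset_index(drop=True))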
Here is a solution:
select_number = select_number['number_to_select']  # easier to select from a Series
data.groupby('currency').apply(
    lambda dfg: dfg.nlargest(select_number[dfg.name], columns='price')
)
Edit - I got my answer from jezrael's answer: I replaced dfg.currency.iloc[0] with dfg.name.
Second edit - As pointed out in the comments, select_number is a dataframe, so I convert it to a series first.
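The same conversion could also be written with squeeze(), which turns a single-column dataframe into a series (a sketch):

# Equivalent conversion sketch: squeeze the single-column DataFrame into a Series.
select_number = select_number.squeeze()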
MaxU and jezrael, thanks for your comments!