Pandas groupby transform cumulative with conditions

I have a large table with many product ids and iso_codes: 2 million rows in total. So the answer should, if possible, also take memory into account; I have 16 GB of memory.

For every (id, iso_code) combination I would like to see how many items were returned before the buy_date in the row (a cumulative count), but there's a catch:
I only want to count returns from previous sales whose return_date falls before the buy_date I'm looking at.

I've added the column items_returned as an example: this is the column that should be calculated.

The idea is as follows: at the moment of the sale I can only count returns that have already happened, not the ones that will happen in the future.

I tried a combination of df.groupby(['id', 'iso_code']).transform(np.cumsum) and .transform(lambda x: <only count returns that happened before my buy_date>), but I couldn't figure out how to do a groupby().transform(np.cumsum) with these special conditions applied.
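
For reference, here is roughly the naive version (just a sketch; the *_naive column names are only for illustration). It reproduces running totals, but neither cumcount nor cumsum can see the current row's buy_date:

import numpy as np

# running count of prior sales per group: close, but it also miscounts
# same-day purchases (rows 8 and 9 below should both get 8)
df['items_bought_naive'] = df.groupby(['id', 'iso_code']).cumcount()

# running count of prior returns: wrong, a return is counted as soon as the
# sale happened, even when its return_date falls after the current buy_date
# (row 3 below gets 3 instead of 2)
df['items_returned_naive'] = (df.groupby(['id', 'iso_code'])['return']
                                .transform(lambda s: s.cumsum().shift(fill_value=0)))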

It's a similar question for items_bought, where I only count items bought on days before my buy_date.

Hope you can help me.

Example resulting table:

+-------+------+------------+----------+------------+---------------+----------------+------------------+
|   row |   id | iso_code   |   return | buy_date   | return_date   |   items_bought |   items_returned |
|-------+------+------------+----------+------------+---------------+----------------+------------------|
|     0 |  177 | DE         |        1 | 2019-05-16 | 2019-05-24    |              0 |                0 |
|     1 |  177 | DE         |        1 | 2019-05-29 | 2019-06-03    |              1 |                1 |
|     2 |  177 | DE         |        1 | 2019-10-27 | 2019-11-06    |              2 |                2 |
|     3 |  177 | DE         |        0 | 2019-11-06 | None          |              3 |                2 |
|     4 |  177 | DE         |        1 | 2019-11-18 | 2019-11-28    |              4 |                3 |
|     5 |  177 | DE         |        1 | 2019-11-21 | 2019-12-11    |              5 |                3 |
|     6 |  177 | DE         |        1 | 2019-11-25 | 2019-12-06    |              6 |                3 |
|     7 |  177 | DE         |        0 | 2019-11-30 | None          |              7 |                4 |
|     8 |  177 | DE         |        1 | 2020-04-30 | 2020-05-27    |              8 |                6 |
|     9 |  177 | DE         |        1 | 2020-04-30 | 2020-09-18    |              8 |                6 |
+-------+------+------------+----------+------------+---------------+----------------+------------------+
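
For example, in row 3 (buy_date 2019-11-06) items_returned is 2: only the returns from rows 0 and 1 count, because row 2's return_date is 2019-11-06 itself, which is not before the buy_date.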

Sample code:

import pandas as pd
from io import StringIO

df_text = """
row id  iso_code    return  buy_date    return_date
0   177 DE  1   2019-05-16  2019-05-24
1   177 DE  1   2019-05-29  2019-06-03
2   177 DE  1   2019-10-27  2019-11-06
3   177 DE  0   2019-11-06  None
4   177 DE  1   2019-11-18  2019-11-28
5   177 DE  1   2019-11-21  2019-12-11
6   177 DE  1   2019-11-25  2019-12-06
7   177 DE  0   2019-11-30  None
8   177 DE  1   2020-04-30  2020-05-27
9   177 DE  1   2020-04-30  2020-09-18
"""

# the sample data above is whitespace-separated, so split on runs of whitespace
df = pd.read_csv(StringIO(df_text), sep=r'\s+', index_col=0)

# parse the date columns; the literal 'None' strings become NaT
df['buy_date'] = pd.to_datetime(df['buy_date'], errors='coerce')
df['return_date'] = pd.to_datetime(df['return_date'], errors='coerce')

# expected results, for reference
df['items_bought'] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 8]
df['items_returned'] = [0, 1, 2, 2, 3, 3, 3, 4, 6, 6]
asked by Sander van den Oord


1 Answer

This seems to require a cross merge: merging df with itself on ['id', 'iso_code'] pairs every row with every sale in its group, and pandas suffixes the overlapping buy_date columns as buy_date_x (the current row) and buy_date_y (the paired sale):

(df[['id', 'iso_code', 'buy_date']].reset_index()
   # pair each row with every sale of the same (id, iso_code)
   .merge(df[['id', 'iso_code', 'return', 'return_date', 'buy_date']],
          on=['id', 'iso_code'])
   # count a paired return/purchase only if it happened strictly before
   # this row's buy_date
   .assign(items_returned=lambda x: x['return_date'].lt(x['buy_date_x']) * x['return'],
           items_bought=lambda x: x['buy_date_y'].lt(x['buy_date_x']))
   .groupby('row')[['items_bought', 'items_returned']].sum()
)

Output:

     items_bought  items_returned
row                              
0               0               0
1               1               1
2               2               2
3               3               2
4               4               3
5               5               3
6               6               3
7               7               4
8               8               6
9               8               6

Update: for larger data the single big merge is not ideal due to its memory requirement, since the merged frame holds every within-group pair at once. We can instead do a groupby() and merge one (smaller) group at a time:

def myfunc(df):
    # same pairwise logic as above, applied to one (id, iso_code) group
    # at a time so the intermediate merged frame stays small
    return (df[['id', 'iso_code', 'buy_date']].reset_index()
            .merge(df[['id', 'iso_code', 'return', 'return_date', 'buy_date']],
                   on=['id', 'iso_code'])
            .assign(items_returned=lambda x: x['return_date'].lt(x['buy_date_x']) * x['return'],
                    items_bought=lambda x: x['buy_date_y'].lt(x['buy_date_x']))
            .groupby('row')[['items_bought', 'items_returned']].sum())

df.groupby(['id', 'iso_code']).apply(myfunc).reset_index(level=[0, 1], drop=True)
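
The trailing reset_index(level=[0, 1], drop=True) drops the id and iso_code index levels that groupby().apply() prepends, so the result is indexed by the original row again.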

And you would get the same output:

     items_bought  items_returned
row                              
0               0               0
1               1               1
2               2               2
3               3               2
4               4               3
5               5               3
6               6               3
7               7               4
8               8               6
9               8               6
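
If even the per-group merge is too heavy for 2 million rows (a group with k rows still materializes k² pairs), here is an alternative sketch, not from the answer above: sort each group's events once and count them with np.searchsorted, which is O(k log k) time and O(k) memory per group. It assumes buy_date and return_date were parsed to datetime64 as in the sample code, and counts_per_group is a hypothetical helper name:

import numpy as np
import pandas as pd

def counts_per_group(g):
    # sort each group's events once, then locate every buy_date by binary
    # search; side='left' counts only events strictly before the buy_date,
    # which also handles the same-day purchases in rows 8 and 9 correctly
    buys = np.sort(g['buy_date'].to_numpy())
    rets = np.sort(g.loc[g['return'].eq(1), 'return_date'].dropna().to_numpy())
    return pd.DataFrame({
        'items_bought': np.searchsorted(buys, g['buy_date'].to_numpy(), side='left'),
        'items_returned': np.searchsorted(rets, g['buy_date'].to_numpy(), side='left'),
    }, index=g.index)

out = df.groupby(['id', 'iso_code'], group_keys=False).apply(counts_per_group)

On the sample data this reproduces the items_bought and items_returned columns shown above.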
answered by Quang Hoang