 

Pandas count the number of times an event has occurred in last n days by group

Tags:

python

pandas

I have a table of events occurring by id. How would I count, for each row, the number of times each event type has occurred for that id in the last n days, prior to the current row?

For example with a list of events like:

df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
                   {'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
                   {'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
                   {'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'}])
df['event_day'] = pd.to_datetime(df['event_day'])
df = df.sort_values(['id', 'event_day'])

or:

   event_day event_type  id
0 2016-01-01      type1   1
1 2016-01-02      type1   1
2 2016-02-01      type2   2
3 2016-02-15      type3   2
4 2016-01-06      type3   3
5 2016-03-11      type3   3

By id, I want to count the number of times each event_type has occurred prior to the current row in the last n days. For example, row 3 has id=2, so: how many times, up to (but not including) that point in the event history, has each of event types 1, 2, and 3 occurred in the last n days for id 2?

The desired output would look something like below:

    event_day   event_type  event_type1_in_last_30days  event_type2_in_last_30days  event_type3_in_last_30days  id
0   2016-01-01  type1       0                           0                           0                           1
1   2016-01-02  type1       1                           0                           0                           1
2   2016-02-01  type2       0                           0                           0                           2
3   2016-02-15  type3       0                           1                           0                           2
4   2016-01-06  type3       0                           0                           0                           3
5   2016-03-11  type3       0                           0                           0                           3
matt_k asked Aug 02 '16 17:08


2 Answers

res = ((((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days')) 
        & (df['event_day'].values < df['event_day'].values[:, None]))
        & (df['id'].values == df['id'].values[:, None]))
        .dot(pd.get_dummies(df['event_type'])))
res
Out: 
array([[ 0.,  0.,  0.],
       [ 1.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])

The first part is to generate a matrix as follows:

(df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
Out: 
array([[ True,  True,  True,  True,  True,  True],
       [ True,  True,  True,  True,  True,  True],
       [False,  True,  True,  True,  True,  True],
       [False, False,  True,  True, False,  True],
       [ True,  True,  True,  True,  True,  True],
       [False, False, False,  True, False,  True]], dtype=bool)

It's a 6x6 matrix in which each row is compared against every other row. It makes use of NumPy's broadcasting for the pairwise comparison (.values[:, None] adds another axis, turning the 1D array into a column vector). To make it complete, we also need to check that the other event occurred strictly before the current row:

(((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days')) 
   & (df['event_day'].values < df['event_day'].values[:, None])))
Out: 
array([[False, False, False, False, False, False],
       [ True, False, False, False, False, False],
       [False,  True, False, False,  True, False],
       [False, False,  True, False, False, False],
       [ True,  True, False, False, False, False],
       [False, False, False,  True, False, False]], dtype=bool)
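The [:, None] broadcasting trick can be seen in isolation on a tiny, hypothetical 3-element example: adding an axis turns a length-3 vector into a 3x1 column, and comparing it against the original 1x3 row yields a full 3x3 pairwise matrix.

```python
import numpy as np

v = np.array([10, 20, 30])
# pairwise[i, j] is True exactly when v[j] < v[i]
pairwise = v < v[:, None]
print(pairwise)
# [[False False False]
#  [ True False False]
#  [ True  True False]]
```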

Another condition is about the id's. Using a similar approach, you can construct a pairwise comparison matrix that shows when id's match:

(df['id'].values == df['id'].values[:, None])
Out: 
array([[ True,  True, False, False, False, False],
       [ True,  True, False, False, False, False],
       [False, False,  True,  True, False, False],
       [False, False,  True,  True, False, False],
       [False, False, False, False,  True,  True],
       [False, False, False, False,  True,  True]], dtype=bool)

ANDing all three conditions together, it becomes:

(((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days')) 
    & (df['event_day'].values < df['event_day'].values[:, None]))
    & (df['id'].values == df['id'].values[:, None]))
Out: 
array([[False, False, False, False, False, False],
       [ True, False, False, False, False, False],
       [False, False, False, False, False, False],
       [False, False,  True, False, False, False],
       [False, False, False, False, False, False],
       [False, False, False, False, False, False]], dtype=bool)

Lastly, you want the counts broken out by event type, so you can use get_dummies:

pd.get_dummies(df['event_type'])
Out: 
   type1  type2  type3
0    1.0    0.0    0.0
1    1.0    0.0    0.0
2    0.0    1.0    0.0
3    0.0    0.0    1.0
4    0.0    0.0    1.0
5    0.0    0.0    1.0

If you matrix-multiply the boolean matrix with this one, each row gives the number of prior rows of each type satisfying all the conditions. You can pass the resulting array to the DataFrame constructor and concat:

pd.concat([df, pd.DataFrame(res, columns = ['e1', 'e2', 'e3'])], axis=1)
Out: 
   event_day event_type  id   e1   e2   e3
0 2016-01-01      type1   1  0.0  0.0  0.0
1 2016-01-02      type1   1  1.0  0.0  0.0
2 2016-02-01      type2   2  0.0  0.0  0.0
3 2016-02-15      type3   2  0.0  1.0  0.0
4 2016-01-06      type3   3  0.0  0.0  0.0
5 2016-03-11      type3   3  0.0  0.0  0.0
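The whole recipe can also be packaged into a reusable function with a configurable window length. This is just a sketch, not part of the answer itself: `count_prior_events` and the generated column names are hypothetical, and the dummies are cast to int so the dot product yields counts even on pandas versions where get_dummies defaults to boolean columns.

```python
import pandas as pd

def count_prior_events(df, days=30):
    day = df['event_day'].values
    in_window = day >= day[:, None] - pd.to_timedelta(f'{days} days')  # within the last `days` days
    earlier = day < day[:, None]                                       # strictly before the current row
    same_id = df['id'].values == df['id'].values[:, None]              # same id only
    dummies = pd.get_dummies(df['event_type']).astype(int)
    counts = pd.DataFrame((in_window & earlier & same_id).astype(int) @ dummies.values,
                          index=df.index,
                          columns=[f'{c}_in_last_{days}days' for c in dummies.columns])
    return pd.concat([df, counts], axis=1)

df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
                   {'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
                   {'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
                   {'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
                   {'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'}])
df['event_day'] = pd.to_datetime(df['event_day'])
df = df.sort_values(['id', 'event_day'])
print(count_prior_events(df, days=30))
```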
ayhan answered Nov 05 '22 09:11


Ok, I really enjoyed ayhan's approach, but here is another one, which is probably slower (just my assumption, since apply is usually slow), although I think the logic is more straightforward. If anyone wants to compare the two, especially how they scale, I'd be very interested:

In [1]: import pandas as pd, numpy as np

In [2]: df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
                           {'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
                           {'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
                           {'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
                           {'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
                           {'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'}])

In [3]: df['event_day'] = pd.to_datetime(df['event_day'])

In [4]: df = df.sort_values(['id', 'event_day'])

In [5]: dummies = pd.get_dummies(df)

In [6]: dummies.set_index('event_day', inplace=True)

In [7]: dummies
Out[7]: 
            id  event_type_type1  event_type_type2  event_type_type3
event_day                                                           
2016-01-01   1               1.0               0.0               0.0
2016-01-02   1               1.0               0.0               0.0
2016-02-01   2               0.0               1.0               0.0
2016-02-15   2               0.0               0.0               1.0
2016-01-06   3               0.0               0.0               1.0
2016-03-11   3               0.0               0.0               1.0

In [8]: import datetime

In [9]: delta30 = datetime.timedelta(days=30)

In [10]: delta1 = datetime.timedelta(days=1)

In [11]: dummies = dummies.apply(lambda x: dummies[dummies.id == x.id].loc[x.name - delta30:x.name - delta1].sum(), axis=1); dummies
Out[11]: 
             id  event_type_type1  event_type_type2  event_type_type3
event_day                                                            
2016-01-01  0.0               0.0               0.0               0.0
2016-01-02  1.0               1.0               0.0               0.0
2016-02-01  0.0               0.0               0.0               0.0
2016-02-15  2.0               0.0               1.0               0.0
2016-01-06  0.0               0.0               0.0               0.0
2016-03-11  0.0               0.0               0.0               0.0
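To see what the lambda computes for a single row, here is the 2016-02-15 event of id 2 spelled out, as a self-contained sketch that rebuilds a minimal two-row dummies just for that id: filter to the same id, label-slice the 30-day window that ends the day before the current row, then sum each column.

```python
import datetime
import pandas as pd

df = pd.DataFrame([{'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
                   {'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'}])
df['event_day'] = pd.to_datetime(df['event_day'])
dummies = pd.get_dummies(df).set_index('event_day')

delta30 = datetime.timedelta(days=30)
delta1 = datetime.timedelta(days=1)
row_day = pd.Timestamp('2016-02-15')

# same-id filter, then the 30-day label slice ending the day before row_day
window = dummies[dummies.id == 2].loc[row_day - delta30:row_day - delta1]
print(window.sum())  # type2 occurred once in the window
```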

Finally, you can merge dummies and your original dataframe, after dropping the 'id' column in dummies and moving event_day back into a regular column:

In [12]: dummies.drop('id', inplace=True, axis=1)

In [13]: dummies.reset_index(inplace=True); dummies
Out[13]: 
   event_day  event_type_type1  event_type_type2  event_type_type3
0 2016-01-01               0.0               0.0               0.0
1 2016-01-02               1.0               0.0               0.0
2 2016-02-01               0.0               0.0               0.0
3 2016-02-15               0.0               1.0               0.0
4 2016-01-06               0.0               0.0               0.0
5 2016-03-11               0.0               0.0               0.0

In [14]: pd.merge(df, dummies, on="event_day")
Out[14]: 
   event_day event_type  id  event_type_type1  event_type_type2  \
0 2016-01-01      type1   1               0.0               0.0   
1 2016-01-02      type1   1               1.0               0.0   
2 2016-02-01      type2   2               0.0               0.0   
3 2016-02-15      type3   2               0.0               1.0   
4 2016-01-06      type3   3               0.0               0.0   
5 2016-03-11      type3   3               0.0               0.0   

   event_type_type3  
0               0.0  
1               0.0  
2               0.0  
3               0.0  
4               0.0  
5               0.0 
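In the spirit of the comparison invited above, a rough timing harness might look like the sketch below. `make_df`, `broadcast_version` and `apply_version` are hypothetical helper names wrapping the two answers; absolute numbers will vary with data size and pandas/NumPy versions, and the dummies are cast to int so the matrix product counts correctly on pandas versions where get_dummies returns booleans.

```python
import datetime
import timeit

import numpy as np
import pandas as pd

def make_df(n_rows, n_ids=50, seed=0):
    # Random event log: n_rows events spread over one year across n_ids ids
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        'id': rng.integers(1, n_ids + 1, n_rows),
        'event_day': pd.Timestamp('2016-01-01')
                     + pd.to_timedelta(rng.integers(0, 365, n_rows), unit='D'),
        'event_type': rng.choice(['type1', 'type2', 'type3'], n_rows),
    }).sort_values(['id', 'event_day']).reset_index(drop=True)

def broadcast_version(df):
    day = df['event_day'].values
    mask = ((day >= day[:, None] - pd.to_timedelta('30 days'))
            & (day < day[:, None])
            & (df['id'].values == df['id'].values[:, None]))
    return mask.astype(int) @ pd.get_dummies(df['event_type']).astype(int).values

def apply_version(df):
    dummies = pd.get_dummies(df).set_index('event_day')
    d30, d1 = datetime.timedelta(days=30), datetime.timedelta(days=1)
    res = dummies.apply(lambda x: dummies[dummies.id == x.id]
                        .loc[x.name - d30:x.name - d1].sum(), axis=1)
    return res.drop('id', axis=1).values

df = make_df(300)
assert np.array_equal(broadcast_version(df), apply_version(df))  # same counts
for fn in (broadcast_version, apply_version):
    print(fn.__name__, timeit.timeit(lambda: fn(df), number=3))
```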
juanpa.arrivillaga answered Nov 05 '22 08:11