python pandas: diff between 2 dates in a groupby

Using Python 3.6 and Pandas 0.19.2:

I have a DataFrame containing parsed log files for transactions. Each line is timestamped, contains a transactionid, and can either represent the beginning or the end of a transaction (so each transactionid has 1 line for start and 1 line for end).

Additional info can also be present on each END line.

I would like to extract the duration of each transaction by subtracting the start date from the end date, and keep the additional info.

Sample input:

import pandas as pd
import io
df = pd.read_csv(io.StringIO('''transactionid;event;datetime;info
1;START;2017-04-01 00:00:00;
1;END;2017-04-01 00:00:02;foo1
2;START;2017-04-01 00:00:02;
3;START;2017-04-01 00:00:02;
2;END;2017-04-01 00:00:03;foo2
4;START;2017-04-01 00:00:03;
3;END;2017-04-01 00:00:03;foo3
4;END;2017-04-01 00:00:04;foo4'''), sep=';', parse_dates=['datetime'])

Which gives the following DataFrame:

   transactionid  event             datetime  info
0              1  START  2017-04-01 00:00:00   NaN
1              1    END  2017-04-01 00:00:02  foo1
2              2  START  2017-04-01 00:00:02   NaN
3              3  START  2017-04-01 00:00:02   NaN
4              2    END  2017-04-01 00:00:03  foo2
5              4  START  2017-04-01 00:00:03   NaN
6              3    END  2017-04-01 00:00:03  foo3
7              4    END  2017-04-01 00:00:04  foo4

Expected output:

A new DataFrame such as:

   transactionid           start_date             end_date  duration  info
0              1  2017-04-01 00:00:00  2017-04-01 00:00:02  00:00:02  foo1
1              2  2017-04-01 00:00:02  2017-04-01 00:00:03  00:00:01  foo2
2              3  2017-04-01 00:00:02  2017-04-01 00:00:03  00:00:01  foo3
3              4  2017-04-01 00:00:03  2017-04-01 00:00:04  00:00:01  foo4

What I have tried:

Since two consecutive lines are not always related to the same transaction, I applied .groupby(by='transactionid') to my DataFrame. I am now stuck trying to "flatten" each group according to my needs.
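
Roughly where I am stuck (just a sketch; the flattening step is what I am missing):

# grouping by transaction id is easy enough...
grouped = df.groupby(by='transactionid')

# ...but I don't know how to collapse each group into a single row
# with start_date, end_date, duration and info
for txid, group in grouped:
    print(txid)
    print(group)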

asked Apr 25 '17 by Guillaume

1 Answer

Try this:

# make sure the datetime column has a datetime64 dtype
# (already the case here because of parse_dates in read_csv)
df.datetime = pd.to_datetime(df.datetime)

# per-group aggregations: the earliest timestamp is the start, the latest is
# the end, their difference is the duration; for 'info', keep the last (END) value
funcs = {
    'datetime': {
        'start_date':   'min',
        'end_date':     'max',
        'duration':     lambda x: x.max() - x.min(),
    },
    'info':             'last'
}

df.groupby(by='transactionid')['datetime', 'info'].agg(funcs).reset_index()

Result:

In [103]: df.groupby(by='transactionid')['datetime','info'].agg(funcs).reset_index()
Out[103]:
   transactionid          start_date            end_date  duration  last
0              1 2017-04-01 00:00:00 2017-04-01 00:00:02  00:00:02  foo1
1              2 2017-04-01 00:00:02 2017-04-01 00:00:03  00:00:01  foo2
2              3 2017-04-01 00:00:02 2017-04-01 00:00:03  00:00:01  foo3
3              4 2017-04-01 00:00:03 2017-04-01 00:00:04  00:00:01  foo4
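
Note that the nested-dict renaming inside .agg works on pandas 0.19.2 but was deprecated in 0.20 and removed in later releases, and the info column comes out labelled last. On a recent pandas version, a roughly equivalent sketch (assuming the same df built above) would use named aggregation and derive the duration afterwards:

# sketch for newer pandas (>= 0.25): named aggregation instead of the
# removed nested-dict renaming; assumes `df` is the DataFrame built above
out = (df.groupby('transactionid', as_index=False)
         .agg(start_date=('datetime', 'min'),
              end_date=('datetime', 'max'),
              info=('info', 'last')))
out['duration'] = out['end_date'] - out['start_date']
out = out[['transactionid', 'start_date', 'end_date', 'duration', 'info']]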
answered Nov 09 '22 by MaxU - stop WAR against UA