Merge pandas DataFrames based on irregular time intervals

I'm wondering how I can speed up a merge of two dataframes. One of the dataframes has time-stamped data points (a value column).

import pandas as pd
import numpy as np

data = pd.DataFrame({'time':np.sort(np.random.uniform(0,100,size=50)),
                     'value':np.random.uniform(-1,1,size=50)})

The other has time interval information (start_time, end_time, and associated interval_id).

intervals = pd.DataFrame({'interval_id':np.arange(9),
                          'start_time':np.random.uniform(0,5,size=9) + np.arange(0,90,10),    
                          'end_time':np.random.uniform(5,10,size=9) + np.arange(0,90,10)})

I'd like to merge these two dataframes more efficiently than the for loop below:

data['interval_id'] = np.nan
for index, ser in intervals.iterrows():
    in_interval = (data['time'] >= ser['start_time']) & \
                  (data['time'] <= ser['end_time'])
    data.loc[in_interval, 'interval_id'] = ser['interval_id']

result = data.merge(intervals, how='outer').sort_values('time').reset_index(drop=True)

I keep imagining I'll be able to use pandas time series functionality, like a date range or TimeGrouper, but I have yet to figure out anything more pythonic (pandas-y?) than the above.

Example result:

     time      value     interval_id  start_time   end_time
0    0.575976  0.022727          NaN         NaN        NaN
1    4.607545  0.222568            0    3.618715   8.294847
2    5.179350  0.438052            0    3.618715   8.294847
3   11.069956  0.641269            1   10.301728  19.870283
4   12.387854  0.344192            1   10.301728  19.870283
5   18.889691  0.582946            1   10.301728  19.870283
6   20.850469 -0.027436          NaN         NaN        NaN
7   23.199618  0.731316            2   21.488868  28.968338
8   26.631284  0.570647            2   21.488868  28.968338
9   26.996397  0.597035            2   21.488868  28.968338
10  28.601867 -0.131712            2   21.488868  28.968338
11  28.660986  0.710856            2   21.488868  28.968338
12  28.875395 -0.355208            2   21.488868  28.968338
13  28.959320 -0.430759            2   21.488868  28.968338
14  29.702800 -0.554742          NaN         NaN        NaN

Any suggestions from time series-savvy people out there would be greatly appreciated.


Update, after Jeff's answer:

The main problem is that interval_id has no relation to any regular time interval (e.g., intervals are not always approximately 10 seconds). One interval could be 10 seconds, the next could be 2 seconds, and the next could be 100 seconds, so I can't use any regular rounding scheme as Jeff proposed. Unfortunately, my minimal example above does not make that clear.
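For what it's worth, newer versions of pandas (0.19 and later) ship pd.merge_asof, which matches each time to the most recent start_time and so handles exactly this kind of irregular lookup. A minimal sketch, assuming both frames are sorted on their key columns and that rows overshooting their interval are blanked out afterwards:

import numpy as np
import pandas as pd

# Pair each time with the last start_time <= time (direction='backward' is the
# default), then null out rows whose time falls past that interval's end_time.
merged = pd.merge_asof(data.sort_values('time'),
                       intervals.sort_values('start_time'),
                       left_on='time', right_on='start_time')
outside = merged['time'] > merged['end_time']
merged.loc[outside, ['interval_id', 'start_time', 'end_time']] = np.nan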

asked Jan 26 '14 by pedmiston

2 Answers

You could use np.searchsorted to find, for each value in data['time'], where it would be inserted into the sorted intervals['start_time'] array, and likewise into the sorted intervals['end_time'] array. Note that np.searchsorted relies on intervals['start_time'] and intervals['end_time'] being in sorted order.

Subtracting one from the start index gives the last interval that starts at or before each time; the end index gives the first interval that ends at or after it. Wherever the two agree, data['time'] falls between that interval's start_time and end_time. Note that this relies on the intervals being disjoint.
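To make the index-matching trick concrete, here is a toy illustration with made-up numbers (hypothetical values, not the benchmark data below):

import numpy as np

starts = np.array([0.0, 10.0, 20.0])   # sorted interval starts
ends   = np.array([5.0, 15.0, 25.0])   # sorted interval ends
times  = np.array([3.0, 7.0, 12.0])    # 7.0 falls in the gap between intervals

start_idx = np.searchsorted(starts, times) - 1   # last start <= time: [0, 0, 1]
end_idx   = np.searchsorted(ends, times)         # first end >= time:  [0, 1, 1]
print(start_idx == end_idx)                      # [ True False  True]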

Using searchsorted in this way is about 5 times faster than using the for-loop:

import pandas as pd
import numpy as np

np.random.seed(1)
data = pd.DataFrame({'time':np.sort(np.random.uniform(0,100,size=50)),
                     'value':np.random.uniform(-1,1,size=50)})

intervals = pd.DataFrame(
    {'interval_id':np.arange(9),
     'start_time':np.random.uniform(0,5,size=9) + np.arange(0,90,10),    
     'end_time':np.random.uniform(5,10,size=9) + np.arange(0,90,10)})

def using_loop():
    data['interval_id'] = np.nan
    for index, ser in intervals.iterrows():
        in_interval = (data['time'] >= ser['start_time']) & \
                      (data['time'] <= ser['end_time'])
        data.loc[in_interval, 'interval_id'] = ser['interval_id']

    result = data.merge(intervals, how='outer').sort_values('time').reset_index(drop=True)
    return result

def using_searchsorted():
    # Index of the last interval whose start_time is <= each time
    start_idx = np.searchsorted(intervals['start_time'].values, data['time'].values) - 1
    # Index of the first interval whose end_time is >= each time
    end_idx = np.searchsorted(intervals['end_time'].values, data['time'].values)
    # The two agree exactly when the time falls inside a (disjoint) interval
    mask = (start_idx == end_idx)
    result = data.copy()
    result['interval_id'] = result['start_time'] = result['end_time'] = np.nan
    result.loc[mask, 'interval_id'] = start_idx[mask]
    result.loc[mask, 'start_time'] = intervals['start_time'].values[start_idx[mask]]
    result.loc[mask, 'end_time'] = intervals['end_time'].values[end_idx[mask]]
    return result

In [254]: %timeit using_loop()
100 loops, best of 3: 7.74 ms per loop

In [255]: %timeit using_searchsorted()
1000 loops, best of 3: 1.56 ms per loop

In [256]: 7.74/1.56
Out[256]: 4.961538461538462
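On pandas 0.20 or later, pd.IntervalIndex can express the same containment lookup declaratively; a sketch under that version assumption (get_indexer requires the intervals to be non-overlapping):

# Closed [start_time, end_time] intervals, matching the >= / <= logic above
ivs = pd.IntervalIndex.from_arrays(intervals['start_time'],
                                   intervals['end_time'], closed='both')
pos = ivs.get_indexer(data['time'])   # position of the containing interval, -1 if none
ids = intervals['interval_id'].values.astype(float)
# ids[pos] is only meaningful where pos != -1; np.where discards the rest
data['interval_id'] = np.where(pos != -1, ids[pos], np.nan)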
answered by unutbu

You may want to specify the 'time' intervals slightly differently, but this should give you a start.

In [34]: data['on'] = np.round(data['time']/10)

In [35]: data.merge(intervals,left_on=['on'],right_on=['interval_id'],how='outer')
Out[35]: 
         time     value  on   end_time  interval_id  start_time
0    1.301658 -0.462594   0   7.630243            0    0.220746
1    2.202654  0.054903   0   7.630243            0    0.220746
2   10.253593  0.329947   1  17.715596            1   10.299464
3   13.803064 -0.601021   1  17.715596            1   10.299464
4   17.086290  0.484119   2  27.175455            2   24.710704
5   21.797655  0.988212   2  27.175455            2   24.710704
6   26.265165  0.491410   3  37.702968            3   30.670753
7   27.777182 -0.121691   3  37.702968            3   30.670753
8   34.066473  0.659260   3  37.702968            3   30.670753
9   34.786337 -0.230026   3  37.702968            3   30.670753
10  35.343021  0.364505   4  49.489028            4   42.948486
11  35.506895  0.953562   4  49.489028            4   42.948486
12  36.129951 -0.703457   4  49.489028            4   42.948486
13  38.794690 -0.510535   4  49.489028            4   42.948486
14  40.508702 -0.763417   4  49.489028            4   42.948486
15  43.974516 -0.149487   4  49.489028            4   42.948486
16  46.219554  0.893025   5  57.086065            5   53.124795
17  50.206860  0.729106   5  57.086065            5   53.124795
18  50.395082 -0.807557   5  57.086065            5   53.124795
19  50.410783  0.996247   5  57.086065            5   53.124795
20  51.602892  0.144483   5  57.086065            5   53.124795
21  52.006921 -0.979778   5  57.086065            5   53.124795
22  52.682896 -0.593500   5  57.086065            5   53.124795
23  52.836037  0.448370   5  57.086065            5   53.124795
24  53.052130 -0.227245   5  57.086065            5   53.124795
25  57.169775  0.659673   6  65.927106            6   61.590948
26  59.336176 -0.893004   6  65.927106            6   61.590948
27  60.297771  0.897418   6  65.927106            6   61.590948
28  61.151664  0.176229   6  65.927106            6   61.590948
29  61.769023  0.894644   6  65.927106            6   61.590948
30  64.221220  0.893012   6  65.927106            6   61.590948
31  67.907417 -0.859734   7  78.192671            7   72.463468
32  71.460483 -0.271364   7  78.192671            7   72.463468
33  74.514028  0.621174   7  78.192671            7   72.463468
34  75.822643 -0.351684   8  88.820139            8   83.183825
35  84.252778 -0.685043   8  88.820139            8   83.183825
36  84.838361  0.354365   8  88.820139            8   83.183825
37  85.770611 -0.089678   9        NaN          NaN         NaN
38  85.957559  0.649995   9        NaN          NaN         NaN
39  86.498339  0.569793   9        NaN          NaN         NaN
40  91.006735  0.731006   9        NaN          NaN         NaN
41  91.941862  0.964376   9        NaN          NaN         NaN
42  94.617522  0.626889   9        NaN          NaN         NaN
43  95.318288 -0.088918  10        NaN          NaN         NaN
44  95.595243  0.539685  10        NaN          NaN         NaN
45  95.818267 -0.989647  10        NaN          NaN         NaN
46  98.240444  0.931445  10        NaN          NaN         NaN
47  98.722869  0.442502  10        NaN          NaN         NaN
48  99.349198  0.585264  10        NaN          NaN         NaN
49  99.829372 -0.743697  10        NaN          NaN         NaN

[50 rows x 6 columns]
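Given the update about irregular intervals, the same binning idea can be preserved by handing pd.cut the explicit, uneven edges instead of rounding; a sketch, assuming the intervals are sorted and non-overlapping:

# Interleave starts and ends into one sorted edge array: [s0, e0, s1, e1, ...];
# even-numbered bins are the intervals themselves, odd-numbered bins are the gaps.
edges = np.ravel(np.column_stack([intervals['start_time'], intervals['end_time']]))
bins = pd.cut(data['time'], bins=edges, labels=False)  # NaN outside the overall range
inside = (bins % 2 == 0)
# bins // 2 recovers the row position, which equals interval_id in this example
data['interval_id'] = np.where(inside, bins // 2, np.nan)

One caveat: pd.cut bins are right-closed by default, so a time exactly equal to a start_time lands in the preceding gap, a slight difference from the >= / <= test in the loop.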
answered by Jeff