I'm working with two data sets, each with its own set of dates. I want to merge them, but because the dates are not exact matches, I believe merge_asof() is the best way to go. However, merge_asof() does a couple of things that are not ideal for my case. The following code is an example:
import pandas as pd

df_a = pd.DataFrame({'date': pd.to_datetime(['1/15/2016', '3/15/2016', '5/15/2016', '7/15/2016'])})
df_b = pd.DataFrame({'date': pd.to_datetime(['1/1/2016', '4/1/2016', '5/1/2016', '6/1/2016', '7/1/2016']),
                     'num': [1, 10, 100, 1000, 10000]})
df_x = pd.merge_asof(df_a, df_b, on='date')
This yields:

        date    num
0 2016-01-15      1
1 2016-03-15      1
2 2016-05-15    100
3 2016-07-15  10000
but instead I would want:

        date    num
0 2016-01-15      1
1 2016-03-15      0
2 2016-05-15    110
3 2016-07-15  11000
...where all the rows whose dates fall between two dates in A are added up, rather than just the single closest row being chosen. Is that possible with merge_asof(), or should I look for another solution?
You are asking for the rows from B that fall between the previous and current row of A. I can get the first and last index pretty easily with this:

import numpy as np
import pandas as pd

# get the previous dates from A:
prev_dates = np.roll(df_a.date, 1)
prev_dates[0] = pd.to_datetime(0)  # sentinel earlier than any date in B

# get the first and last index of B for each interval:
start = np.searchsorted(df_b.date, prev_dates)
stop = np.searchsorted(df_b.date, df_a.date, side='right') - 1
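On the question's example frames, those boundaries come out as follows (a quick check, restating the setup so it runs standalone):

```python
import numpy as np
import pandas as pd

df_a = pd.DataFrame({'date': pd.to_datetime(['1/15/2016', '3/15/2016', '5/15/2016', '7/15/2016'])})
df_b = pd.DataFrame({'date': pd.to_datetime(['1/1/2016', '4/1/2016', '5/1/2016', '6/1/2016', '7/1/2016']),
                     'num': [1, 10, 100, 1000, 10000]})

prev_dates = np.roll(df_a.date, 1)
prev_dates[0] = pd.to_datetime(0)  # sentinel earlier than any date in B

# start[i] is the first B row at or after the previous A date;
# stop[i] is the last B row at or before the current A date.
start = np.searchsorted(df_b.date, prev_dates)
stop = np.searchsorted(df_b.date, df_a.date, side='right') - 1

print(list(start))  # [0, 1, 1, 3]
print(list(stop))   # [0, 0, 2, 4]
```

Note that for the second A row (3/15), start is past stop, so the slice is empty and the sum will be 0.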
And now I can use a little list comprehension to get my results:
>>> [df_b.num.values[begin:end+1].sum() for begin, end in zip(start, stop)]
[1, 0, 110, 11000]
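Putting the pieces together, here is a self-contained sketch that wraps the steps above into one helper (the function name `sum_between` is my own, and it assumes the `on` column is a sorted datetime column in both frames):

```python
import numpy as np
import pandas as pd

def sum_between(df_a, df_b, on='date', val='num'):
    """For each row of df_a, sum df_b[val] over the B rows whose `on`
    value lies between the previous and current A date."""
    prev = np.roll(df_a[on], 1)
    prev[0] = pd.to_datetime(0)  # sentinel earlier than any date in B
    start = np.searchsorted(df_b[on], prev)
    stop = np.searchsorted(df_b[on], df_a[on], side='right') - 1
    sums = [df_b[val].values[b:e + 1].sum() for b, e in zip(start, stop)]
    return df_a.assign(**{val: sums})

df_a = pd.DataFrame({'date': pd.to_datetime(['1/15/2016', '3/15/2016', '5/15/2016', '7/15/2016'])})
df_b = pd.DataFrame({'date': pd.to_datetime(['1/1/2016', '4/1/2016', '5/1/2016', '6/1/2016', '7/1/2016']),
                     'num': [1, 10, 100, 1000, 10000]})

df_x = sum_between(df_a, df_b)
print(df_x.num.tolist())  # [1, 0, 110, 11000]
```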
Thanks for posting this question. It prompted me to spend an educational couple of hours studying the merge_asof source. I do not think your solution can be improved on considerably, but I can offer a couple of tweaks to speed it up by a few percent.
import pandas as pd

# if we concat the original date vector, we only need to merge once
df_ax = pd.concat([df_a, df_a.rename(columns={'date': 'date1'})], axis=1)
# do the outer merge
df_m = pd.merge(df_ax, df_b, on='date', how='outer').sort_values(by='date')
# do a single rename
df_m = df_m.rename(columns={'date': 'datex', 'date1': 'date'})
# fill the gaps to allow the groupby and sum
df_m['num'] = df_m['num'].fillna(0)
df_m['date'] = df_m['date'].bfill()  # fillna(method='bfill') is deprecated
# roll up the results
x = df_m.groupby('date').num.sum().reset_index()
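Run end to end on the question's frames, this approach produces the same numbers as the searchsorted answer. A minimal check (setup restated so it runs standalone; I use `.bfill()` because `fillna(method='bfill')` is deprecated in recent pandas — note the sums come back as floats, since the outer merge introduces NaNs into `num`):

```python
import pandas as pd

df_a = pd.DataFrame({'date': pd.to_datetime(['1/15/2016', '3/15/2016', '5/15/2016', '7/15/2016'])})
df_b = pd.DataFrame({'date': pd.to_datetime(['1/1/2016', '4/1/2016', '5/1/2016', '6/1/2016', '7/1/2016']),
                     'num': [1, 10, 100, 1000, 10000]})

# single outer merge, then back-fill so each B row is labeled
# with the next A date, and sum per A date
df_ax = pd.concat([df_a, df_a.rename(columns={'date': 'date1'})], axis=1)
df_m = pd.merge(df_ax, df_b, on='date', how='outer').sort_values(by='date')
df_m = df_m.rename(columns={'date': 'datex', 'date1': 'date'})
df_m['num'] = df_m['num'].fillna(0)
df_m['date'] = df_m['date'].bfill()
x = df_m.groupby('date').num.sum().reset_index()

print(x.num.tolist())  # [1.0, 0.0, 110.0, 11000.0]
```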