Pandas fill in missing date within each group with information in the previous row

Similar question to this one, but with some modifications:

Instead of filling in missing dates for each group between the min and max date of the entire column, we should only fill in the dates between the min and the max of that group, and when a group has multiple rows for the same date, only the last row should be kept.

Reproducible example:

import pandas as pd

x = pd.DataFrame({'dt': ['2016-01-01', '2016-01-03', '2016-01-04',
                         '2016-01-01', '2016-01-01', '2016-01-04'],
                  'amount': [10.0, 30.0, 40.0, 78.0, 80.0, 82.0],
                  'sub_id': [1, 1, 1, 2, 2, 2]})

Visually:

            dt  sub_id  amount
0   2016-01-01       1    10.0
1   2016-01-03       1    30.0
2   2016-01-04       1    40.0
3   2016-01-01       2    78.0
4   2016-01-01       2    80.0
5   2016-01-04       2    82.0

Output I need:

            dt  sub_id  amount
0   2016-01-01       1    10.0
1   2016-01-02       1    10.0
2   2016-01-03       1    30.0
3   2016-01-04       1    40.0
4   2016-01-01       2    80.0
5   2016-01-02       2    80.0
6   2016-01-03       2    80.0
7   2016-01-04       2    82.0

We group by sub_id and deduplicate on dt within each group. As you can see, for sub_id=1 a row was added for 2016-01-02 and its amount was imputed as 10.0, since the previous row's amount was 10.0 (assume the data is sorted beforehand to enable this). For sub_id=2, rows were added for 2016-01-02 and 2016-01-03 with amount 80.0, as that was the last amount before those dates. The first row for 2016-01-01 was also dropped, because we only want to keep the last row per date and sub_id.

I'm looking for the most efficient way to do this, as the real data has millions of rows. My current method uses a lambda applied across groups of sub_id (roughly the sketch below), but I feel like we could do better.
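
For reference, a minimal sketch of the kind of per-group apply approach I mean (fill_group is a hypothetical stand-in for my lambda, not the exact code):

def fill_group(g):
    # keep the last row per date, then reindex onto this group's own
    # min-to-max daily range and forward-fill the missing values
    g = g.drop_duplicates('dt', keep='last').set_index('dt')
    days = pd.date_range(g.index.min(), g.index.max(), freq='D')
    return g.reindex(days).ffill()

x['dt'] = pd.to_datetime(x['dt'])
current = (x.groupby('sub_id', group_keys=False)
            .apply(fill_group)            # note: sub_id comes back as float; cast if needed
            .rename_axis('dt')
            .reset_index())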

Thanks!

1 Answer

First, get the date column into an actual datetime dtype:

x.dt = pd.to_datetime(x.dt)

Then, after dropping duplicate (dt, sub_id) rows (keeping the last), set dt as the index and handle each sub_id group separately: asfreq('D') expands the index to every day between that group's own min and max dates, and ffill imputes the new rows from the previous one, with downcast='infer' restoring integer dtypes where possible:

cols = ['dt', 'sub_id']

pd.concat([
    # build the daily range within each group and forward-fill
    d.asfreq('D').ffill(downcast='infer')
    for _, d in x.drop_duplicates(cols, keep='last')  # last row per (dt, sub_id)
                 .set_index('dt').groupby('sub_id')
]).reset_index()

          dt  amount  sub_id
0 2016-01-01      10       1
1 2016-01-02      10       1
2 2016-01-03      30       1
3 2016-01-04      40       1
4 2016-01-01      80       2
5 2016-01-02      80       2
6 2016-01-03      80       2
7 2016-01-04      82       2
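
Note: on newer pandas versions the downcast argument to ffill is deprecated, so here is an equivalent sketch under the assumption that the filled values are whole numbers (an explicit integer cast stands in for downcast='infer'):

out = pd.concat([
    d.asfreq('D').ffill()
    for _, d in x.drop_duplicates(cols, keep='last')
                 .set_index('dt').groupby('sub_id')
]).reset_index()
# cast back to int, mirroring what downcast='infer' did for whole numbers
out[['amount', 'sub_id']] = out[['amount', 'sub_id']].astype(int)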