For a Pandas DataFrame I'm trying to get a grouped by rolling mean with a changeable window size specified on each row that's relative to a date time index.
For the following df of weekly data:
| week_start_date | material | location | quantity | window_size |
|-----------------|----------|----------|----------|-------------|
| 2019-01-28 | C | A | 870 | 1 |
| 2019-02-04 | C | A | 920 | 3 |
| 2019-02-18 | C | A | 120 | 1 |
| 2019-02-25 | C | A | 120 | 2 |
| 2019-03-04 | C | A | 120 | 1 |
| 2018-12-31 | D | A | 1200 | 8 |
| 2019-01-21 | D | A | 720 | 8 |
| 2019-01-28 | D | A | 480 | 8 |
| 2019-02-04 | D | A | 600 | 8 |
| 2019-02-11 | D | A | 720 | 8 |
| 2019-02-18 | D | A | 80 | 8 |
| 2019-02-25 | D | A | 600 | 8 |
| 2019-03-04 | D | A | 1200 | 8 |
| 2019-01-14 | E | B | 150 | 1 |
| 2019-01-28 | E | B | 1416 | 1 |
| 2019-02-04 | F | B | 1164 | 1 |
| 2019-01-28 | G | B | 11520 | 8 |
The window needs to be relative to the actual date set in week_start_date, rather than treating it like an integer index.
It needs to be grouped by material and location.
The rolling mean is for column quantity.
The window size needs to vary/change based on the value in the window_size column. This value changes over time - it represents the number of weeks back in time that quantity needs to be aggregated for.
When a row isn't available, the mean should assume that value is 0, i.e.:
when a week-dated row isn't available
mean(null, null, null, 1000) = 1000
but it should actually:
mean(0,0,0,1000)=250
However - this should only apply after the first observation has been measured.
I can get a static window of 8 weeks (56 days) using the following:
df.set_index('week_start_date').groupby(['material', 'location'])['quantity'].rolling('56D', min_periods=1).mean()
I've explored use of expanding but haven't been successful.
How can the window size be set relative to each row it reads?
# Example Data
df = pd.DataFrame({'week_start_date': ['2019-01-28','2019-02-04','2019-02-18','2019-02-25','2019-03-04','2018-12-31','2019-01-21','2019-01-28','2019-02-04','2019-02-11','2019-02-18','2019-02-25','2019-03-04','2019-01-14','2019-01-28','2019-02-04','2019-01-28'],
'material': ['C','C','C','C','C','D','D','D','D','D','D','D','D','E','E','F','G'],
'location': ['A','A','A','A','A','A','A','A','A','A','A','A','A','B','B','B','B'],
'quantity': ['870','920','120','120','120','1200','720','480','600','720','80','600','1200','150','1416','1164','11520'],
'min_of_pdt_or_8_weeks': ['1','3','1','2','1','8','8','8','8','8','8','8','8','1','3','1','8']})
# Fix formats
df['week_start_date'] = pd.to_datetime(df['week_start_date'])
df['actual_week_qty'] = df['quantity'].astype(float)
| material | location | week_start_date | quantity |
| C | A | 2019-01-28 | 870 |
| C | A | 2019-04-02 | 306.6667 |
| C | A | 2019-02-18 | 520 |
| C | A | 2019-02-25 | 386.6667 |
| D | A | 2018-12-31 | 1200 |
| D | A | 2019-01-21 | 960 |
| D | A | 2019-01-28 | 800 |
| D | A | 2019-04-02 | 600 |
| D | A | 2019-11-02 | 720 |
| D | A | 2019-02-18 | 400 |
| D | A | 2019-02-25 | 466.6667 |
| D | A | 2019-04-03 | 650 |
| E | B | 2019-01-14 | 150 |
| E | B | 2019-01-28 | 783 |
| F | B | 2019-04-02 | 1164 |
| G | B | 2019-01-28 | 11520 |
A naive way you might do this, is to do the 8 (assuming this is bounded!) calculations and merge the results:
In [11]: d = {w: df.set_index('week_start_date')
.groupby(['material', 'location'])['quantity']
.rolling(f'{7*w}D', min_periods=1)
.mean()
.reset_index(name="mean")
.assign(window_size=w)
for w in range(1, 9)}
then you can concat these DataFrames together and merge with the original, since we have the window_size column in both left and right it'll inner on that.
In [12]: pd.concat(d.values()).merge(df, how="inner")
Out[12]:
material location week_start_date mean window_size quantity
0 C A 2019-01-28 870.000000 1 870.0
1 C A 2019-02-18 520.000000 1 120.0
2 C A 2019-04-03 320.000000 1 120.0
3 E B 2019-01-14 150.000000 1 150.0
4 F B 2019-04-02 1164.000000 1 1164.0
5 C A 2019-02-25 386.666667 2 120.0
6 C A 2019-04-02 920.000000 3 920.0
7 E B 2019-01-28 783.000000 3 1416.0
8 D A 2018-12-31 1200.000000 8 1200.0
9 D A 2019-01-21 960.000000 8 720.0
10 D A 2019-01-28 800.000000 8 480.0
11 D A 2019-04-02 600.000000 8 600.0
12 D A 2019-11-02 720.000000 8 720.0
13 D A 2019-02-18 400.000000 8 80.0
14 D A 2019-02-25 466.666667 8 600.0
15 D A 2019-04-03 650.000000 8 1200.0
16 G B 2019-01-28 11520.000000 8 11520.0
Note: This assumes you've set the fillna of window_size to 8:
df.window_size = df.window_size.replace('NaN', 8).astype(int) # in your example
Further, you want to ensure you pass format to to_datetime to ensure you don't hit ambiguity, pandas may be able to do a good job here in infering it... but I wouldn't rely on it (use explicitly format='%d/%m/%Y). You want to get rid of the weird date formats as soon as you read it in, this can also be passed in read_csv (dayfirst=True) and friends.
I'm not entirely convinced this is what you want, since there is a difference between your input df and expected (e.g. there's no G B in the expected...).
Regardless, I suspect there is a single shoot way to do this, but it will depend on the sparsity of the week/material/location (if it's dense it'll be much easier, if it's sparse this may be the best bet)...
Now I think about it, you can do this completely on the material/location subDataFrame, can you simplify this problem to just be a function of that DataFrame (just week+value ignoring material/location) or will that apply be too slow?
If you love us? You can donate to us via Paypal or buy me a coffee so we can maintain and grow! Thank you!
Donate Us With