I have the following data:
data = [0.1, 0.2, 0.3, 0.4 , 0.5, 0.6, 0.7, 0.8, 0.5, 0.2, 0.1, -0.1,
-0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.9, -1.2, -0.1, -0.7]
Every time the data point changes by more than the step size, I want to record it. If it does not, I want to keep the old one until the cumulative change is at least as much as the step size. I achieve this iteratively like so:
import pandas as pd
from copy import deepcopy
import numpy as np
step = 0.5
df_steps = pd.Series(data)
df = df_steps.copy()
today = None
yesterday = None
for index, value in df_steps.iteritems():  # note: Series.iteritems() was removed in pandas 2.0; use .items() there
    today = deepcopy(index)
    if today is not None and yesterday is not None:
        if abs(df.loc[today] - df_steps.loc[yesterday]) > step:
            df_steps.loc[today] = df.loc[today]
        else:
            df_steps.loc[today] = df_steps.loc[yesterday]
    yesterday = deepcopy(today)
My final result is:
[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.7, 0.7, 0.7, 0.7, 0.1, 0.1, 0.1, 0.1, 0.1, -0.5, -0.5, -0.5, -0.5, -1.2, -0.1, -0.7]
Problem and Question
The problem is that this is achieved iteratively (I agree with the second answer here). My question is: how can I achieve the same result in a vectorized fashion?
Attempts
My attempt is the following, but it does not match the result:
(df.diff().cumsum().replace(np.nan, 0) / step).astype(int)
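I believe the reason this diverges is that diff().cumsum() measures each point's change relative to the very first value, whereas the rule above compares against the last recorded value, which resets every time the threshold is crossed. That reset makes the recurrence stateful, so a single diff/cumsum pass cannot reproduce it. An illustrative check (not part of my original attempt, just showing what the expression actually computes):

buckets = (pd.Series(data).diff().cumsum().fillna(0) / step).astype(int)
print(buckets.tolist())  # bucket index of each point relative to the first value only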
Since a purely vectorized approach does not seem trivial, we can go with numba to compile the code down to C level, and hence have a loopy but very performant approach. Here's one way using numba's nopython mode:
from numba import njit, float64

@njit('float64[:](float64[:], float32)')
def set_at_cum_change(a, step):
    out = np.empty(len(a), dtype=float64)
    # the first value is always recorded and becomes the initial reference
    prev = a[0]
    out[0] = a[0]
    for i in range(1, len(a)):
        current = a[i]
        if np.abs(current - prev) > step:
            # change exceeds the step: record the new value and reset the reference
            out[i] = current
            prev = current
        else:
            # otherwise carry the last recorded value forward
            out[i] = out[i-1]
    return out
Testing on the same array gives:
data = np.array([0.1, 0.2, 0.3, 0.4 , 0.5, 0.6, 0.7, 0.8, 0.5, 0.2, 0.1, -0.1,
-0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.9, -1.2, -0.1, -0.7])
out = set_at_cum_change(data,step= 0.5)
print(out)
array([ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.7, 0.7, 0.7, 0.7, 0.1,
0.1, 0.1, 0.1, 0.1, -0.5, -0.5, -0.5, -0.5, -1.2, -0.1, -0.7])
If we check the timings, we see a huge ~110,000x speedup with the numba approach on a 22,000-element array. This not only shows that numba is a great approach in these cases, but it also makes clear that using pandas' iterrows/iteritems is almost always a bad idea:
def op(data):
    step = 0.5
    df_steps = pd.Series(data)
    df = df_steps.copy()
    today = None
    yesterday = None
    for index, value in df_steps.iteritems():
        today = deepcopy(index)
        if today is not None and yesterday is not None:
            if abs(df.loc[today] - df_steps.loc[yesterday]) > step:
                df_steps.loc[today] = df.loc[today]
            else:
                df_steps.loc[today] = df_steps.loc[yesterday]
        yesterday = deepcopy(today)
    return df_steps.to_numpy()

def fn(step):
    current = float('inf')
    i = yield
    while True:
        if abs(current - i) > step:
            current = i
            i = yield i
        else:
            i = yield current

def andrej(data):
    df = pd.DataFrame({'data': data})
    f = fn(0.5)
    next(f)
    df['new_data'] = df['data'].apply(lambda x: f.send(x))
data_large = np.tile(data, 1_000)
print(data_large.shape)
# (22000,)
np.allclose(op(data_large), set_at_cum_change(data_large, step=0.5))
# True
%timeit op(data_large)
# 5.78 s ± 329 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit andrej(data_large)
# 13.6 ms ± 1.53 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit set_at_cum_change(data_large, step=0.5)
# 50.4 µs ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
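As a side note on the decorator: the explicit 'float64[:](float64[:], float32)' signature compiles the function eagerly but also pins the accepted argument types, so calling it with, say, an integer array will typically fail with a no-matching-definition error. If you prefer to let numba infer the types on the first call instead, a minimal sketch of the same loop with lazy compilation (the _lazy name is just illustrative) would be:

from numba import njit
import numpy as np

@njit  # no explicit signature: numba specializes on the first call's argument types
def set_at_cum_change_lazy(a, step):
    out = np.empty(len(a), dtype=np.float64)
    prev = a[0]
    out[0] = a[0]
    for i in range(1, len(a)):
        if np.abs(a[i] - prev) > step:
            out[i] = a[i]        # change large enough: record it and reset the reference
            prev = a[i]
        else:
            out[i] = out[i - 1]  # otherwise keep the last recorded value
    return out

The trade-off is a one-time compilation cost on the first call rather than at decoration time.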