Say I have some_data of shape (1, n). I have new incoming_data of shape (1, n±x), where x is some positive integer much smaller than n. I would like to squeeze or stretch incoming_data so that it has the same length n. How might this be done using the SciPy stack?
Here's an example of what I'm trying to accomplish.
# Stretch arr2 to arr1's shape while "filling in" interpolated value
arr1 = np.array([1, 5, 2, 3, 7, 2, 1])
arr2 = np.array([1, 5, 2, 3, 7, 1])
result
> np.array([1, 5, 2, 3, 6.x, 2.x, 1]) # of shape (arr1.shape)
As another example:
# Squeeze arr2 to arr1's shape while placing interpolated values.
arr1 = np.array([1, 5, 2, 3, 7, 2, 1])
arr2 = np.array([1, 5, 2, 3, 4, 7, 2, 1])
result
> np.array([1, 5, 2, 3.x, 7.x, 2.x, 1]) # of shape (arr1.shape)
With the help of numpy.resize(), we can change the size of an array. The array can be of any shape; to resize it we just need the new shape, e.g. (2, 2), (2, 3) and so on. Note that the np.resize() function fills missing entries by repeating the input, while the ndarray.resize() method fills them with zeros.
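A minimal sketch of those two resize behaviours (the example values are mine, just for illustration):

```python
import numpy as np

a = np.array([1, 2, 3])

# the np.resize() function fills extra slots by repeating the input
print(np.resize(a, 5))        # [1 2 3 1 2]

# the ndarray.resize() method pads with zeros instead
b = np.array([1, 2, 3])
b.resize(5, refcheck=False)   # refcheck=False avoids reference-count errors
print(b)                      # [1 2 3 0 0]
```

Neither variant interpolates, which is why resize alone doesn't solve the question.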
The interp1d() function of the scipy.interpolate package is used to interpolate a 1-D function. It takes arrays of values x and y approximating some function y = f(x), and returns a function that uses interpolation to find values at new points.
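A minimal sketch of that usage (the sample points here are my own illustration):

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x                      # samples of the function y = f(x) = 2x

f = interp1d(x, y)             # linear interpolation by default
print(f(1.5))                  # 3.0, interpolated between y[1]=2 and y[2]=4
```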
You can implement this simple compression or stretching of your data using scipy.interpolate.interp1d. I'm not saying it necessarily makes sense (it makes a huge difference what kind of interpolation you're using, and you'll generally only get a reasonable result if you can correctly guess the behaviour of the underlying function), but you can do it.
The idea is to interpolate your original array over its indices as x values, then evaluate that interpolant on a new x mesh with the desired number of points, keeping the end points the same. So essentially you have to construct a continuum approximation to your discrete data, and resample that at the necessary points:
import numpy as np
import scipy.interpolate as interp
import matplotlib.pyplot as plt

arr_ref = np.array([1, 5, 2, 3, 7, 1])    # shape (6,), reference
arr1 = np.array([1, 5, 2, 3, 7, 2, 1])    # shape (7,), to "compress"
arr2 = np.array([1, 5, 2, 7, 1])          # shape (5,), to "stretch"

# interpolate each array over its indices, then resample at arr_ref.size points
arr1_interp = interp.interp1d(np.arange(arr1.size), arr1)
arr1_compress = arr1_interp(np.linspace(0, arr1.size - 1, arr_ref.size))

arr2_interp = interp.interp1d(np.arange(arr2.size), arr2)
arr2_stretch = arr2_interp(np.linspace(0, arr2.size - 1, arr_ref.size))

# plot the examples, assuming the same x_min, x_max for all data
xmin, xmax = 0, 1
fig, (ax1, ax2) = plt.subplots(ncols=2)
ax1.plot(np.linspace(xmin, xmax, arr1.size), arr1, 'bo-',
         np.linspace(xmin, xmax, arr1_compress.size), arr1_compress, 'rs')
ax2.plot(np.linspace(xmin, xmax, arr2.size), arr2, 'bo-',
         np.linspace(xmin, xmax, arr2_stretch.size), arr2_stretch, 'rs')
ax1.set_title('"compress"')
ax2.set_title('"stretch"')
The resulting plot:
In the plots, blue circles are the original data points, and red squares are the interpolated ones (these overlap at the boundaries). As you can see, what I called compressing and stretching is actually upsampling and downsampling of an underlying (linear, by default) function. This is why I said you must be very careful with interpolation: you can get very wrong results if your expectations don't match your data.
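To illustrate that last point, here's a small sketch (reusing the question's arr1) of how much the kind argument of interp1d can change the resampled values away from the end points:

```python
import numpy as np
from scipy.interpolate import interp1d

arr1 = np.array([1, 5, 2, 3, 7, 2, 1], dtype=float)
x = np.arange(arr1.size)
x_new = np.linspace(0, arr1.size - 1, 6)   # resample 7 points down to 6

linear = interp1d(x, arr1, kind='linear')(x_new)
cubic = interp1d(x, arr1, kind='cubic')(x_new)

# the end points always agree with the original data,
# but the interior values can differ substantially between kinds
print(linear)
print(cubic)
```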
There's another package that works very well for upsampling and downsampling: resampy. It has a simpler interface than scipy.interpolate.interp1d, but it only offers a single interpolation method. As @Andras Deak said, you have to be careful in choosing interpolation functions.
import numpy as np
import resampy
from matplotlib import pyplot as plt

x_mesh = np.linspace(0, 1, 10)
short_arr = np.sin(x_mesh * 2 * np.pi)
plt.plot(short_arr)

# treat the array as a signal sampled at 20 Hz and resample it to 100 Hz,
# i.e. upsample by a factor of 5
interp_arr = resampy.resample(short_arr, 20, 100)
plt.plot(interp_arr)
Two words of caution:
resampy
uses a "band-limited sinc interpolation". Check the documentation for more info. It works best if your array originally came from data with local frequency components, e.g. sound, images, and other time-series data. It's used in some of the tensorflow examples on audio, which is what I use. I'm not sure whether your example array was small for demonstration purposes, but if that truly is the size of your array, interpolating may be bad whatever method you use, linear, spline, or otherwise.
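If you'd rather stay within the SciPy stack, scipy.signal.resample does a comparable band-limited (FFT-based) resampling to an arbitrary target length; a minimal sketch, using a sine wave similar to the example above:

```python
import numpy as np
from scipy.signal import resample

x_mesh = np.linspace(0, 1, 10, endpoint=False)
short_arr = np.sin(x_mesh * 2 * np.pi)   # one full period of a sine wave

# FFT-based resampling to any target length; works best on periodic signals
stretched = resample(short_arr, 50)
print(stretched.shape)                   # (50,)
```

Like resampy, this assumes the signal is band-limited (and, for scipy.signal.resample, periodic), so it's a poor fit for short arrays of arbitrary values.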
Your examples demonstrated more than interpolation. It seems you found a portion of the arrays that matched (e.g. [1, 5, 2, 3]) and then interpolated the rest. Depending on whether you want to match the beginning of the array or an arbitrary number of patches, you may be asking for two methods: one to identify the correct portions of an array to interpolate, and one to interpolate those portions. If that's the case, look at numpy.isin for a basic method, or Levenshtein distance for more general matching of substrings.
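As a rough sketch of the first of those two methods, restricted to matching at the start of the arrays (the helper name and the prefix-only assumption are mine, not from the question):

```python
import numpy as np

def longest_common_prefix(a, b):
    """Length of the longest identical prefix of two 1-D arrays
    (hypothetical helper; only handles matching at the start)."""
    n = min(a.size, b.size)
    mismatch = np.nonzero(a[:n] != b[:n])[0]
    return n if mismatch.size == 0 else int(mismatch[0])

arr1 = np.array([1, 5, 2, 3, 7, 2, 1])
arr2 = np.array([1, 5, 2, 3, 4, 7, 2, 1])

# the first 4 entries match; only the tail would then need interpolating
print(longest_common_prefix(arr1, arr2))   # 4
```

Matching patches in the middle of the arrays would need something more general, such as the Levenshtein-style alignment mentioned above.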