If I have a waveform x, such as

x = [math.sin(W*t + Ph) for t in range(16)]

with arbitrary W and Ph, and I calculate its (real) FFT f with

f = numpy.fft.rfft(x)

then I can get the original x back with

numpy.fft.irfft(f)
Now, what if I need to extend the range of the recovered waveform a number of samples to the left and to the right? I.e., a waveform y such that len(y) == 48, y[16:32] == x, and y[0:16] and y[32:48] are the periodic extensions of the original waveform.
In other words, if the FFT assumes its input is an infinite function f(t) sampled over t = 0, 1, ..., N-1, how can I recover the values of f(t) for t < 0 and t >= N?
Note: I used a perfect sine wave as an example, but in practice x could be anything: arbitrary signals such as x = range(16) or x = np.random.rand(16), or a segment of any length taken from a random .wav file.
> Now, what if I need to extend the range of the recovered waveform a number of samples to the left and to the right? I.e., a waveform y such that len(y) == 48, y[16:32] == x, and y[0:16] and y[32:48] are the periodic extensions of the original waveform.
The periodic extensions are also just x: repeating x is exactly what a periodic extension is.
> In other words, if the FFT assumes its input is an infinite function f(t) sampled over t = 0, 1, ..., N-1, how can I recover the values of f(t) for t < 0 and t >= N?
The "N-point FFT assumes" that your signal is periodic with a periodicity of N. That's because all the harmonic base functions your block is decomposed into are periodic in the way that the previous N and succeding N samples are just a copy of the main N samples.
If you allow arbitrary values for W, your input sinusoid won't be periodic with period N. But that does not stop the FFT from decomposing it into a sum of sinusoids that each have period N, and a sum of sinusoids with period N also has period N.
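A quick check makes this concrete (the specific W and Ph below are arbitrary choices for illustration, not values from the question):

import math
import numpy as np

W, Ph = 0.9, 0.3                      # arbitrary; this sine is NOT periodic over 16 samples
x = np.array([math.sin(W * t + Ph) for t in range(32)])

f = np.fft.rfft(x[:16])
block = np.fft.irfft(f)

print(np.allclose(block, x[:16]))          # True: the analysed block is recovered exactly
print(np.allclose(np.tile(block, 2), x))   # False: its periodic extension does not follow the true sine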
Clearly, you have to rethink the problem.
Maybe you could make use of linear prediction. Compute a couple of linear prediction coefficients based on your fragment's windowed auto-correlation and the Levinson-Durbin recursion and extrapolate using those prediction coefficients. However, for a stable prediction filter, the prediction will converge to zero and the speed of convergence depends on what kind of signal you have. The perfect linear prediction coefficients for white noise, for example, are all zero. In that case you would "extrapolate" zeros to the left and the right. But there's not much you can do about it. If you have white noise, there is just no information in your fragment about surrounding samples because all the samples are independent (that's what white noise is about).
This kind of linear prediction is actually able to predict sinusoid samples perfectly. So, if your input is sin(W*t+p) for arbitrary W and p, you will only need linear prediction of order two. For more complex signals I suggest an order of 10 or 16.
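Here is a minimal sketch of that idea, assuming a plain (unwindowed) biased autocorrelation estimate; the helper names lpc_coeffs and extrapolate are made up for this example:

import numpy as np

def lpc_coeffs(x, order):
    # Levinson-Durbin recursion on the biased autocorrelation estimate.
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) for k in range(order + 1)]) / N
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a                           # error-filter coefficients, a[0] == 1

def extrapolate(x, n_left, n_right, order=2):
    # Extend x by n_left samples to the left and n_right to the right using
    # the prediction x[n] = -sum(a[j] * x[n - j] for j in 1..order).
    x = np.asarray(x, dtype=float)
    a = lpc_coeffs(x, order)
    fwd = list(x)
    for _ in range(n_right):
        fwd.append(-np.dot(a[1:], fwd[-1:-(order + 1):-1]))
    bwd = list(x[::-1])                # backward prediction reuses the same coefficients
    for _ in range(n_left):
        bwd.append(-np.dot(a[1:], bwd[-1:-(order + 1):-1]))
    return np.concatenate((bwd[len(x):][::-1], fwd))

For the sine example, extrapolate(x, 16, 16, order=2) returns a length-48 array whose middle 16 samples are exactly x and whose outer samples approximately continue the sine instead of repeating the block; for more complex material a higher order (10 or 16, as suggested above) is more appropriate.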
If, on the other hand, the periodic extension itself (three copies of x) is what you are after, the following examples should give you a good idea of how to go about it:
>>> x1 = np.random.rand(4)
>>> x2 = np.concatenate((x1, x1))
>>> x3 = np.concatenate((x1, x1, x1))
>>> np.fft.rfft(x1)
array([ 2.30410617+0.j , -0.89574460-0.26838271j, -0.26468792+0.j ])
>>> np.fft.rfft(x2)
array([ 4.60821233+0.j , 0.00000000+0.j ,
-1.79148921-0.53676542j, 0.00000000+0.j , -0.52937585+0.j ])
>>> np.fft.rfft(x3)
array([ 6.91231850+0.j , 0.00000000+0.j ,
0.00000000+0.j , -2.68723381-0.80514813j,
0.00000000+0.j , 0.00000000+0.j , -0.79406377+0.j ])
Of course the easiest way to get three periods is to concatenate 3 copies of the inverse FFT in the time domain:
np.concatenate((np.fft.irfft(f),) * 3)
But if you want or have to do this in the frequency domain, you can do the following:
>>> a = np.arange(4)
>>> f = np.fft.rfft(a)
>>> n = 3
>>> ext_f = np.zeros(((len(f) - 1) * n + 1,), dtype=f.dtype)
>>> ext_f[::n] = f * n
>>> np.fft.irfft(ext_f)
array([ 0., 1., 2., 3., 0., 1., 2., 3., 0., 1., 2., 3.])
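The same zero-insertion trick works for an arbitrary even-length signal and reproduces the y from the question; here is a quick check:

x = np.random.rand(16)
f = np.fft.rfft(x)
n = 3
ext_f = np.zeros(((len(f) - 1) * n + 1,), dtype=f.dtype)
ext_f[::n] = f * n
y = np.fft.irfft(ext_f)

print(len(y) == 48)                    # True
print(np.allclose(y[16:32], x))        # True: the middle block is the original x
print(np.allclose(y, np.tile(x, n)))   # True: same as concatenating 3 copies in the time domain

(As written, this assumes x has even length.)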