Fast iteration of NumPy arrays

I'm new to Python and trying to do some basic signal processing, and I'm running into a serious performance problem. Is there a Python trick for doing this in a vectorized manner? Basically, I'm trying to implement a first-order filter, but one where the filter characteristics may change from one sample to the next. If it were just one filter I would use scipy.signal.lfilter(), but this is a bit trickier. Here's the snippet of code that runs very slowly:

#filter state
state = 0

#perform filtering
for sample in amplitude:
    if sample == 1.0:  # attack filter
        sample = (1.0 - att_coeff) * sample + att_coeff * state
    else:              # release filter
        sample = (1.0 - rel_coeff) * sample + rel_coeff * state

    state = sample
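
For reference, if the coefficient were fixed rather than switching per sample, the whole loop would collapse into a single scipy.signal.lfilter() call. A minimal sketch of that constant-coefficient case (the coefficient value is just an illustration):

import numpy as np
from scipy.signal import lfilter

# One-pole smoother y[n] = (1 - c)*x[n] + c*y[n-1] with a fixed coefficient c;
# in lfilter's b/a form this is b = [1 - c], a = [1, -c].
c = 0.5  # illustrative coefficient
amplitude = np.random.random(100000)
filtered = lfilter([1.0 - c], [1.0, -c], amplitude)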
asked Sep 15 '15 by Stefan Sullivan

People also ask

Is it faster to iterate through a NumPy array?

Consider code that multiplies each element of one array by the corresponding element of another and then sums all the individual products. The NumPy version of that computation is about 100 times faster than iterating over a list.
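
A minimal sketch of that comparison (the array sizes are arbitrary):

import numpy as np

a = np.random.random(1_000_000)
b = np.random.random(1_000_000)

# Pure-Python loop: multiply corresponding elements, then sum the products.
total = 0.0
for x, y in zip(a, b):
    total += x * y

# Vectorized equivalent: a single call executed in C.
total_np = np.dot(a, b)  # or (a * b).sum()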

How can I make NumPy arrays faster?

By explicitly declaring the ndarray data type and the types of the variables that touch it, Cython can give drastic speed increases at runtime; array processing can be up to 1250x faster.
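
A minimal sketch of that idea, written in Cython's "pure Python" mode so it remains valid Python syntax (the function and the type declarations are illustrative):

# cython: language_level=3
import cython

@cython.boundscheck(False)
@cython.wraparound(False)
def smooth(x: cython.double[:], c: cython.double) -> cython.double:
    # With the type declarations above, Cython compiles this into a plain
    # C loop over the memoryview instead of interpreted Python bytecode.
    state: cython.double = 0.0
    i: cython.Py_ssize_t
    for i in range(x.shape[0]):
        state = (1.0 - c) * x[i] + c * state
    return state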

Does NumPy vectorize fast?

Again, some have observed vectorize to be faster than normal for loops, but even the NumPy documentation states: “The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop.”
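
A quick sketch of the difference; np.vectorize gives ufunc-style broadcasting but keeps a Python-level loop underneath (the clipping function is just an example):

import numpy as np

def clip01(x):
    return min(max(x, 0.0), 1.0)

vclip = np.vectorize(clip01)     # convenient elementwise API...
a = np.random.random(1000) * 2 - 0.5
out = vclip(a)                   # ...but still a Python-level loop inside

out_fast = np.clip(a, 0.0, 1.0)  # a truly vectorized equivalent, run in C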

What is faster than NumPy?

pandas provides a number of C- or Cython-optimized functions that can be faster than their NumPy equivalents (for example, reading text from text files).
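
For example, pandas' C-based CSV parser versus NumPy's pure-Python text reader (the file name is hypothetical):

import numpy as np
import pandas as pd

# np.loadtxt parses the file in Python and is comparatively slow.
a = np.loadtxt("samples.csv", delimiter=",")

# pd.read_csv uses a C parser and is usually much faster on large files.
b = pd.read_csv("samples.csv", header=None).to_numpy()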


1 Answer

You could consider using one of the Python-to-native-code converters, such as Cython, Numba or Pythran.

For instance, running your original code with timeit gives me:

$ python -m timeit -s 'from co import co; import numpy as np; a = np.random.random(100000)' 'co(a, .5, .7)'
10 loops, best of 3: 120 msec per loop

while annotating it with Pythran, as in:

#pythran export co(float[], float, float)
def co(amplitude, att_coeff, rel_coeff):
    # filter state
    state = 0

    # perform filtering
    for sample in amplitude:
        if sample == 1.0: # attack filter
            state = (1.0 - att_coeff) * sample + att_coeff * state
        else:             # release filter
            state = (1.0 - rel_coeff) * sample + rel_coeff * state
    return state

and compiling it with

$ pythran co.py

gives me:

$ python -m timeit -s 'from co import co; import numpy as np; a = np.random.random(100000)' 'co(a, .5, .7)' 
1000 loops, best of 3: 253 usec per loop

That's roughly a 470x speedup! I expect Numba and Cython to give similar speedups.
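
For instance, a Numba version of the same function needs nothing more than a decorator; a minimal sketch (not timed here):

import numba

@numba.njit
def co(amplitude, att_coeff, rel_coeff):
    state = 0.0
    for sample in amplitude:
        if sample == 1.0:  # attack filter
            state = (1.0 - att_coeff) * sample + att_coeff * state
        else:              # release filter
            state = (1.0 - rel_coeff) * sample + rel_coeff * state
    return state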

answered Oct 07 '22 by serge-sans-paille