
What is the difference between @jit and @vectorize in numba?

When should I use @vectorize?

I tried @jit on the part of the code shown below,

from numba import jit
import numpy as np

@jit
def kma(g, temp):
    # elementwise: k = exp(-(g + |g|) / (2 * temp))
    k = np.exp(-(g + np.abs(g)) / (2 * temp))
    return k

but it didn't accelerate the algorithm. Why?

kinder chen asked Nov 29 '17

People also ask

What is vectorize in Numba?

Using vectorize(), you write your function as operating over input scalars, rather than arrays. Numba will generate the surrounding loop (or kernel) allowing efficient iteration over the actual inputs. The vectorize() decorator needs you to pass a list of signatures you want to support.

What is Nopython in Numba?

Numba has two compilation modes: nopython mode and object mode. In nopython mode, the Numba compiler will generate code that does not access the Python C API. This mode produces the highest performance code, but requires that the native types of all values in the function can be inferred.

Is Numba faster than NumPy?

For the uninitiated Numba is an open-source JIT compiler that translates a subset of Python/NumPy code into an optimized machine code using the LLVM compiler library. In short Numba makes Python/NumPy code runs faster.

Does Numba work with CPU?

Numba supports Intel and AMD x86, POWER8/9, and ARM CPUs (including Apple M1), NVIDIA GPUs, Python 3.7-3.10, as well as Windows/macOS/Linux. Precompiled Numba binaries for most systems are available as conda packages and pip-installable wheels.


1 Answer

@vectorize is used to write an expression that can be applied one element at a time (scalars) to an array. The @jit decorator is more general and can work on any type of calculation.

There is a detailed discussion of the other benefits in the docs:

http://numba.pydata.org/numba-doc/latest/user/vectorize.html

You might ask yourself, “why would I go through this instead of compiling a simple iteration loop using the @jit decorator?”. The answer is that NumPy ufuncs automatically get other features such as reduction, accumulation or broadcasting.

The reason your code isn't being sped up (I see almost identical performance between the jitted and non-jitted versions) is that the operation you're performing is already handled entirely by the low-level compiled code behind NumPy's vectorized operations.

You might get some savings by unrolling the implicit loops to avoid creating intermediate arrays, but Numba typically excels for operations that aren't easily vectorized in NumPy.

JoshAdel answered Nov 15 '22