 

Difference between @cuda.jit and @jit(target='gpu')

Tags:

cuda

numba

I have a question on working with the Python CUDA libraries from Continuum's Accelerate and Numba packages. Is using the decorator @jit with target='gpu' the same as using @cuda.jit?

Asked Mar 09 '16 by Perry Holen


People also ask

What is Cuda JIT?

The CUDA JIT is a low-level entry point to the CUDA features in Numba. It translates Python functions into PTX code, which executes on the CUDA hardware. The jit decorator is applied to Python functions written in our Python dialect for CUDA.

What does Cuda () do in Python?

CUDA Python provides uniform APIs and bindings for inclusion into existing toolkits and libraries to simplify GPU-based parallel processing for HPC, data science, and AI. CuPy is a NumPy/SciPy compatible Array library from Preferred Networks, for GPU-accelerated computing with Python.

Does Numba use GPU?

Numba supports CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels and device functions following the CUDA execution model. Kernels written in Numba appear to have direct access to NumPy arrays. NumPy arrays are transferred between the CPU and the GPU automatically.

Why Numba is used in Python?

Numba reads the Python bytecode for a decorated function and combines this with information about the types of the input arguments to the function. It analyzes and optimizes your code, and finally uses the LLVM compiler library to generate a machine code version of your function, tailored to your CPU capabilities.


1 Answer

No, they are not the same, although the eventual compilation path (through PTX into assembler) is. The @jit decorator is the general compiler path, which can optionally be steered onto a CUDA device. The @cuda.jit decorator is effectively the low-level Python CUDA kernel dialect that Continuum Analytics developed. So you get support for CUDA built-in variables like threadIdx and memory space specifiers like __shared__ in @cuda.jit.

If you want to write a CUDA kernel in Python and compile and run it, use @cuda.jit. If you want to accelerate an existing piece of Python code, use @jit with a CUDA target.

Answered Oct 03 '22 by talonmies