Numpy can be "linked/compiled" against different BLAS implementations (MKL, ACML, ATLAS, GotoBLAS, etc.). That's not always straightforward to configure, but it is possible.
Is it also possible to "link/compile" numpy against NVIDIA's CUBLAS implementation?
I couldn't find any resources on the web, and before I spend too much time trying it I wanted to make sure that it is possible at all.
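For reference, a quick way to see which BLAS/LAPACK implementation an existing numpy build was linked against (this is plain numpy, nothing CUDA-specific):

    import numpy as np

    # Prints the BLAS/LAPACK libraries numpy was built against
    # (e.g. MKL, OpenBLAS, ATLAS), along with include/library paths.
    np.show_config()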
(Background: the cuBLAS library is NVIDIA's GPU-accelerated implementation of the basic linear algebra subroutines (BLAS), exposing the standard BLAS API optimized for NVIDIA GPUs.)
In a word: no, you can't do that.
There is a rather good scikit called scikits.cuda, built on top of PyCUDA, which provides access to CUBLAS from Python. PyCUDA provides a numpy.ndarray-like class which seamlessly allows manipulation of numpy arrays in GPU memory with CUDA. So you can use CUBLAS and CUDA with numpy, but you can't just link against CUBLAS and expect it to work.
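To illustrate, here is a minimal sketch of that workflow, assuming PyCUDA and scikits.cuda (imported as skcuda in more recent releases) are installed along with a working CUDA toolkit; skcuda.linalg.dot hands the matrix product to CUBLAS on the GPU:

    import numpy as np
    import pycuda.autoinit            # creates a CUDA context
    import pycuda.gpuarray as gpuarray
    import skcuda.linalg as linalg    # older releases: scikits.cuda.linalg

    linalg.init()                     # sets up the CUBLAS handle

    # Ordinary numpy arrays on the host (sizes are arbitrary examples)
    a = np.random.rand(1000, 1000).astype(np.float32)
    b = np.random.rand(1000, 1000).astype(np.float32)

    # Copy them into GPU memory as gpuarray objects
    a_gpu = gpuarray.to_gpu(a)
    b_gpu = gpuarray.to_gpu(b)

    # Matrix multiply executed by CUBLAS on the GPU
    c_gpu = linalg.dot(a_gpu, b_gpu)

    # Bring the result back as a plain numpy array
    c = c_gpu.get()

The point is that the GPU work happens through these explicit gpuarray objects; your existing numpy code does not transparently start calling CUBLAS.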
There is also a commercial library that provides numpy- and CUBLAS-like functionality with a Python interface, but I will leave it to one of their shills to fill you in on that.
Here is another possibility: http://www.cs.toronto.edu/~tijmen/gnumpy.html
This is basically a gnumpy + cudamat environment which can be used to harness a GPU. The same code can also be run without the GPU using npmat. Refer to the link above to download all these packages.
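As a rough sketch of what that looks like in practice (assuming gnumpy and cudamat are installed and a CUDA-capable GPU is available):

    import numpy as np
    import gnumpy as gnp   # backed by cudamat (GPU) or npmat (CPU simulation)

    # Wrap host numpy arrays as garrays stored in GPU memory
    a = gnp.garray(np.random.rand(500, 500))
    b = gnp.garray(np.random.rand(500, 500))

    # numpy-like operations are executed on the GPU via cudamat
    c = gnp.dot(a, b)
    s = c.sum()

    # Convert the result back to a plain numpy array on the host
    c_host = c.as_numpy_array()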