I'm trying to use numba to do np.diff on my GPU.
Here is the script I use:
import numpy as np
import numba

@numba.vectorize(["float32(float32, float32)"], target='cuda')
def vector_diff_axis0(a, b):
    return a + b

def my_diff(A, axis=0):
    if (axis == 0):
        return vector_diff_axis0(A[1:], A[:-1])
    if (axis == 1):
        return vector_diff_axis0(A[:,1:], A[:,:-1])

A = np.matrix([
    [0, 1, 2, 3, 4],
    [5, 6, 7, 8, 9],
    [9, 8, 7, 6, 5],
    [4, 3, 2, 1, 0],
    [0, 2, 4, 6, 8]
], dtype='float32')
C = my_diff(A, axis=1)
print (str(C))
And here is the error I get:
TypeError: No matching version. GPU ufunc requires array arguments
to have the exact types. This behaves like regular ufunc with casting='no'.
Does anybody know the reason for this?
PS: I used this video as a guide for my script: https://youtu.be/jKV1m8APttU?t=388
EDIT: Thanks for the fast answers!
I added dtype='float32' to np.matrix, but now I get this error:
Known signatures:
 * (float32, float32) -> float32
File "", line 5
[1] During: resolving callee type: Function(signature=(float32, float32) -> float32>)
[2] During: typing of call at (5)
I also tried using float32[:] in the signature, but it doesn't work, and in the video I followed they don't do that either.
The dtype of your matrix will be int32, which does not match the signature of vector_diff_axis0, since it requires float32. You need to make the matrix float32, i.e. pass the argument dtype='float32' when you call np.matrix.
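For example, a minimal sketch of the fix (keeping the rest of the script unchanged): create the matrix with an explicit float32 dtype, or cast an existing integer array with .astype(np.float32), so the arguments match the ufunc's float32 signature.

import numpy as np

# Explicit float32 dtype so the inputs match the
# "float32(float32, float32)" signature of the CUDA ufunc.
A = np.matrix([
    [0, 1, 2, 3, 4],
    [5, 6, 7, 8, 9],
    [9, 8, 7, 6, 5],
    [4, 3, 2, 1, 0],
    [0, 2, 4, 6, 8]
], dtype='float32')

# An existing integer matrix can be converted instead:
# A = A.astype(np.float32)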