
Numba CUDA shared memory size at runtime?

Tags:

python

cuda

numba

In CUDA C it's straightforward to define shared memory with a size specified at runtime. How can I do this with Numba/NumbaPro CUDA?

What I've done so far has only resulted in errors with the message "Argument 'shape' must be a constant".

EDIT: Just to clarify, what I want is an equivalent of the following in C CUDA (example taken and adapted from here):

__global__ void dynamicReverse(int *d, int n)
{
  extern __shared__ int s[];

  // some work in the kernel with the shared memory
}

int main(void)
{
  const int n = 64;
  int a[n];

  // run dynamic shared memory version; the third launch parameter is
  // the dynamic shared memory size in bytes (device allocation of the
  // array omitted for brevity)
  dynamicReverse<<<1,n,n*sizeof(int)>>>(a, n);

}
asked May 28 '15 by diogoaos
1 Answer

I found the solution (through the very helpful Continuum Analytics user support). We define the shared array as we normally would, but set its shape to 0. Then, to set the size of the shared array, we pass it as the fourth launch parameter (after the stream identifier) when invoking the kernel. E.g.:

import numpy as np
import numba
from numba import cuda

@cuda.autojit
def myKernel(a):
    # shape=0 declares the array as dynamic shared memory;
    # its actual size comes from the launch configuration
    sm = cuda.shared.array(shape=0, dtype=numba.int32)

    # do stuff

arga = np.arange(512)
grid = 1
block = 512
stream = 0
sm_size = arga.size * arga.dtype.itemsize  # size in bytes
myKernel[grid, block, stream, sm_size](arga)
answered Nov 14 '22 by diogoaos