
define variable size on array in local memory, using CUDA

Tags:

arrays

c

cuda

Is it possible to create a list or array in a device function, with the size of the list/array being a parameter of the call… or a global variable that's initialized at call time?

I would like something like one of these lists to work:

unsigned int size1;

__device__ void function(int size2) {

    int list1[size1];
    int list2[size2];
}

Is it possible to do something smart to make something like this work?

SenfMeister asked Jul 10 '10


1 Answer

There is one way to allocate a dynamic amount of shared memory: use the third kernel launch parameter:

__global__ void kernel (int * arr) 
{
    extern __shared__ int buf []; // size is set at launch time, not here
    // copy data to shared mem:
    buf[threadIdx.x] = arr[blockIdx.x * blockDim.x + threadIdx.x];
    // . . . 
}
// . . . 
// launch kernel, set size of shared mem in bytes (k elements in buf):
kernel<<<grid, threads, k * sizeof(int)>>> (arr);
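To make the mechanism concrete, here is a minimal self-contained sketch (my addition, not part of the original answer; the kernel name and sizes are illustrative): the shared buffer's size is decided entirely by the host at launch time, so the same kernel works for any `n` that fits in one block.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Reverses an array in place, staging it through dynamically
// sized shared memory. The buffer size in bytes is the third
// <<<...>>> launch parameter.
__global__ void reverse(int *arr, int n)
{
    extern __shared__ int buf[];      // n ints, sized at launch
    int t = threadIdx.x;
    buf[t] = arr[t];                  // stage into shared memory
    __syncthreads();                  // wait until all threads wrote
    arr[t] = buf[n - 1 - t];          // write back reversed
}

int main()
{
    const int n = 8;
    int h[n];
    for (int i = 0; i < n; ++i) h[i] = i;

    int *d;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

    // One block of n threads, with n * sizeof(int) bytes of
    // dynamic shared memory for buf.
    reverse<<<1, n, n * sizeof(int)>>>(d, n);

    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%d ", h[i]);
    printf("\n");
    cudaFree(d);
    return 0;
}
```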

There is also a hack for multiple arrays:

__device__ void function(int * a, int * b, int k) // k elements in first list
{
    extern __shared__ int list1 [];
    extern __shared__ int list2 []; // list2 points to the same address as list1

    list1 [threadIdx.x] = a[blockIdx.x * blockDim.x + threadIdx.x];
    list2 [k + threadIdx.x] = b[blockIdx.x * blockDim.x + threadIdx.x];
    // . . .
}

Keep in mind that shared memory is allocated per block, so the size you pass at launch must cover all of the arrays together.
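A cleaner way to express the same hack (my addition, following the pattern the CUDA C++ Programming Guide uses for multiple dynamic arrays; the names `k`, `m`, and `smem` are illustrative) is to declare a single `extern __shared__` buffer and partition it with pointer offsets, which avoids aliasing two declarations:

```cuda
// Two variable-sized lists carved out of one dynamic shared buffer.
__global__ void kernel(int *a, int *b, int k, int m)
{
    extern __shared__ int smem[];   // one allocation for both lists
    int *list1 = smem;              // first k elements
    int *list2 = smem + k;          // next m elements

    // ... fill and use list1 and list2 as separate arrays ...
}

// Launch with enough shared memory for both lists:
// kernel<<<grid, threads, (k + m) * sizeof(int)>>>(a, b, k, m);
```

The offsets make the layout explicit, which matters once the arrays have different element types and you need to respect alignment.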

KoppeKTop answered Oct 19 '22