
Fastest (or most elegant) way of passing constant arguments to a CUDA kernel

Tags: c++, cuda

Let's say I want a CUDA kernel that needs to do lots of stuff, but there are some parameters that are constant across all the kernel launches. These arguments are passed to the main program as an input, so they cannot be defined in a #define.

The kernel will run multiple times (around 65K launches) and it needs those parameters (and some other inputs) to do its maths.

My question is: what's the fastest (or, failing that, the most elegant) way of passing these constants to the kernel?

The constants are float* or int* arrays of 2 or 3 elements each, and there will be around 5-10 of them.


A toy example with two constants, const1 and const2:

__global__ void kernelToyExample(int inputdata, ?????){
    float value = inputdata * const1[0] + const2[1] / const1[2];
}

Is it better to do

__global__ void kernelToyExample(int inputdata, float* const1, float* const2){
    float value = inputdata * const1[0] + const2[1] / const1[2];
}

or

__global__ void kernelToyExample(int inputdata, float const1x, float const1y, float const1z, float const2x, float const2y){
    float value = inputdata * const1x + const2y / const1z;
}

or maybe declare them in some global read-only memory and let the kernels read from there? If so, L1, L2, or global? Which one?
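For instance, something like this is what I mean by the read-only memory option (just a sketch using __constant__ memory; h_const1 and h_const2 would be the host arrays, and I don't know if this is the right approach):

__constant__ float const1[3];
__constant__ float const2[2];

__global__ void kernelToyExample(int inputdata){
    float value = inputdata * const1[0] + const2[1] / const1[2];
    // ... use value ...
}

// host side, once before the launches:
// cudaMemcpyToSymbol(const1, h_const1, 3 * sizeof(float));
// cudaMemcpyToSymbol(const2, h_const2, 2 * sizeof(float));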

Is there a better way I don't know of?

Running on a Tesla K40.

asked Jul 22 '15 by Ander Biguri



1 Answer

Just pass them by value. The compiler will automagically put them in the optimal place to facilitate cached broadcast to all threads in each block - either shared memory in compute capability 1.x devices, or constant memory/constant cache in compute capability >= 2.0 devices.

For example, if you had a long list of arguments to pass to the kernel, a struct passed by value is a clean way to go:

struct arglist {
    float magicfloat_1;
    float magicfloat_2;
    //......
    float magicfloat_19;
    int magicint1;
    //......
};

__global__ void kernel(...., const arglist args)
{
    // you get the idea
}

[standard disclaimer: written in browser, not real code, caveat emptor]
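Host-side usage is then just a matter of filling in the struct and passing it at launch, something like this (again a sketch; the values and launch configuration are made up):

arglist args;
args.magicfloat_1 = 1.0f;
args.magicfloat_2 = 2.0f;
// ... fill in the remaining members ...
args.magicint1 = 42;

kernel<<<blocks, threads>>>(...., args);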

If it turned out that one of your magicints actually only takes one of a small number of values which you know beforehand, then templating is an extremely powerful tool:

template<int magiconstant1>
__global__ void kernel(....)
{
    for(int i = 0; i < magiconstant1; ++i) {
        // .....
    }
}

template __global__ void kernel<3>(....);
template __global__ void kernel<4>(....);
template __global__ void kernel<5>(....);

The compiler is smart enough to recognise that magiconstant1 makes the loop trip count known at compile time and will automatically unroll the loop for you. Templating is a very powerful technique for building fast, flexible codebases, and you would be well advised to familiarise yourself with it if you haven't already done so.
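Because the template parameter must be a compile-time constant, a value that only arrives at runtime is dispatched to one of the pre-built instantiations, typically with a small switch on the host. A sketch, assuming the three instantiations above and a hypothetical launch_kernel wrapper:

void launch_kernel(int magicint, dim3 blocks, dim3 threads /*, other args */)
{
    // select the instantiation matching the runtime value
    switch (magicint) {
    case 3: kernel<3><<<blocks, threads>>>(....); break;
    case 4: kernel<4><<<blocks, threads>>>(....); break;
    case 5: kernel<5><<<blocks, threads>>>(....); break;
    default: break; // unsupported value; handle as appropriate
    }
}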

answered Oct 24 '22 by 3 revs