
Proper way to write kernel functions in CUDA?

Tags: c++, c, cuda, labview

I am just about to embark on converting a program I wrote into CUDA to hopefully increase processing speed.

Now, obviously, my old program executes many functions one after the other; I have separated these functions out, and my main program calls each one in order.

int main()
{
  // initialization of variables
  function1();
  function2();
  function3();
  // print result
  return 0;
}

These functions are inherently serial, as function2 depends on the results of function1.

Alright, so now I want to convert these functions into kernels, and run the tasks in the functions in parallel.

Is it as simple as rewriting each function in a parallel way and then, in my main program, calling each kernel one after the other? Is this slower than it needs to be? For example, can I have my GPU directly execute the next parallel operation without going back to the CPU to launch the next kernel?

Obviously I will keep all run-time variables in GPU memory to limit the amount of data transfer going on, so should I even worry about the time it takes between kernel calls?

I hope this question is clear, if not please ask me to elaborate. Thanks.

And here is an extra question so that I can check my sanity. Ultimately this program's input is a video file, and through the different functions, each frame will lead to a result. My plan is to grab multiple frames at a time (say 8 unique frames) and then divide the total number of blocks I have among these 8 frames, and then the multiple threads in the blocks will be doing even more parallel operations on the image data, such as vector addition, Fourier transforms, etc.
Is this the right way to approach the problem?
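
Roughly, I am imagining something like this (all names hypothetical): blockIdx.y would pick one of the 8 frames in the batch, while blockIdx.x and threadIdx.x cover the pixels of a single frame.

const int FRAMES_PER_BATCH = 8;

// Hypothetical per-pixel kernel: the grid's y-dimension selects the frame,
// the x-dimension covers that frame's pixels.
__global__ void process_batch(const float* frames, float* results, int frame_size)
{
    int frame = blockIdx.y;                               // which of the 8 frames
    int pixel = blockIdx.x * blockDim.x + threadIdx.x;    // which pixel within it
    if (pixel < frame_size) {
        int idx = frame * frame_size + pixel;
        results[idx] = frames[idx] * 2.0f;                // placeholder per-pixel operation
    }
}

// Launch for one batch of 8 frames:
// dim3 grid((frame_size + 255) / 256, FRAMES_PER_BATCH);
// process_batch<<<grid, 256>>>(d_frames, d_results, frame_size);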

asked Jul 18 '12 by Shawn Tabrizi



2 Answers

There are some cases where you can get programs to run at the full potential speed on the GPU with very little porting work from a plain CPU version, and this might be one of them.

If it's possible for you to have a function like this:

void process_single_video_frame(void* part_of_frame)
{
  // initialize variables
  ...
  intermediate_result_1 = function1(part_of_frame);
  intermediate_result_2 = function2(intermediate_result_1);
  intermediate_result_3 = function3(intermediate_result_2);
  store_results(intermediate_result_3);
}

and you can process many part_of_frames at the same time (say, a few thousand),

and function1(), function2() and function3() go through pretty much the same code paths (that is, the program flow does not depend heavily on the contents of the frame),

then local memory may do all the work for you. Local memory is thread-private memory that is physically stored in global memory, but it differs from plain global memory in a subtle yet profound way: it is interleaved so that adjacent threads access adjacent 32-bit words, which allows the accesses to be fully coalesced whenever all threads read from the same location of their local memory arrays.

Your program would start out by copying part_of_frame to a local array and preparing other local arrays for the intermediate results. You would then pass pointers to the local arrays between the various functions in your code.

Some pseudocode:

const int size_of_one_frame_part = 1000;

__device__ void function1(int* dst, int* src);    // forward declaration

__global__ void my_kernel(int* all_parts_of_frames) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int my_local_array[size_of_one_frame_part];   // lives in local memory
    memcpy(my_local_array,
           all_parts_of_frames + i * size_of_one_frame_part,
           size_of_one_frame_part * sizeof(int));
    int local_intermediate_1[100];
    function1(local_intermediate_1, my_local_array);
    ...
}

__device__ void function1(int* dst, int* src) {
   ...
}

In summary, this approach may let you use your CPU functions pretty much unchanged, as the parallelism does not come from creating parallelized versions of your functions, but from running the entire chain of functions in parallel. This, in turn, is made possible by the hardware support for interleaving the memory in local arrays.

Notes:

  • The initial copy of the part_of_frame from global to local memory is not coalesced, but hopefully you will have enough calculations to hide that.

  • On devices of compute capability <= 1.3, there is only 16 KiB of local memory available per thread, which may not be enough for your part_of_frame and the other intermediate data. But on compute capability >= 2.0, this has been expanded to 512 KiB, which should be plenty.
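
If you need to check which case you are in at run time, the standard CUDA runtime API exposes the compute capability:

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);   // query device 0
if (prop.major >= 2) {
    // compute capability 2.0+: up to 512 KiB of local memory per thread
} else {
    // compute capability 1.x: only 16 KiB per thread; a large part_of_frame
    // plus intermediates may not fit
}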

answered Oct 06 '22 by Roger Dahl

Answering some of your questions:

Calling a kernel isn't that expensive, so don't be afraid of the program flow returning from the GPU to the CPU. As long as you keep your results in GPU memory, there won't be much overhead. If you want to, you can make a kernel that simply calls other device functions in a sequence. AFAIK this will be harder to debug and profile; I am not sure whether one can even profile device functions called from within a kernel.
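
A minimal sketch of that pattern, with hypothetical kernels kernel1 through kernel3 operating on device buffers: launches issued on the same (default) stream execute in order, so kernel2 sees kernel1's results without any host-side synchronization in between.

float *d_in, *d_tmp, *d_out;
cudaMalloc(&d_in,  n * sizeof(float));
cudaMalloc(&d_tmp, n * sizeof(float));
cudaMalloc(&d_out, n * sizeof(float));

// One upload at the start, one download at the end; everything in
// between stays in GPU memory.
cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

kernel1<<<blocks, threads>>>(d_tmp, d_in,  n);   // queued on the default stream,
kernel2<<<blocks, threads>>>(d_out, d_tmp, n);   // these run strictly one after
kernel3<<<blocks, threads>>>(d_out, n);          // the other on the GPU

cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);  // synchronizes implicitly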

Regarding parallelization:

I think any idea that allows you to run computation on multiple data streams is a good one. The more your code resembles a shader, the better (meaning it will have the required characteristics to run fast on a GPU). The idea with multiple frames is nice. Some hints about it:

  • minimize synchronization as much as possible,

  • access memory as rarely as possible, and try to increase the ratio of computation time to I/O time,

  • make use of GPU registers and shared memory,

  • prefer many-reads-from-one designs over one-writes-to-many designs (see the sketch below).
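
As an illustration of the last hint, here is a small sketch (a hypothetical smooth3 kernel, assuming a block size of 256): each block stages its chunk of the data in shared memory once, and every thread then reads its neighbours from fast shared memory instead of re-reading global memory.

__global__ void smooth3(const float* in, float* out, int n)
{
    __shared__ float tile[256 + 2];                  // block chunk plus 1-element halo
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x + 1;                       // shift by 1 for the left halo

    tile[lid] = (gid < n) ? in[gid] : 0.0f;          // stage the chunk, zero-padding past the end
    if (threadIdx.x == 0)                            // left halo element
        tile[0] = (gid > 0) ? in[gid - 1] : 0.0f;
    if (threadIdx.x == blockDim.x - 1)               // right halo element
        tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0f;
    __syncthreads();                                 // one sync, then only shared-memory reads

    if (gid < n)
        out[gid] = (tile[lid - 1] + tile[lid] + tile[lid + 1]) / 3.0f;
}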

answered Oct 06 '22 by emesx