 

How can a __global__ function RETURN a value or BREAK out like C/C++ does?

Tags:

cuda

Recently I've been doing string-comparison jobs in CUDA, and I wonder how a __global__ function can return a value when it finds the exact string that I'm looking for.

I mean, I need the __global__ function, which runs a great number of threads, to search for a certain string in a very large string pool simultaneously, and I hope that once the exact string is caught, the __global__ function can stop all the threads, return to the main function, and tell me "he did it"!

I'm using CUDA C. How can I possibly achieve this?

Asked Sep 20 '12 by Kai Cui


1 Answer

There is no way in CUDA (or on NVIDIA GPUs) for one thread to interrupt execution of all running threads. You can't have immediate exit of the kernel as soon as a result is found; it's just not possible today.

But you can have all threads exit as soon as possible after one thread finds a result. Here's a model of how you would do that.

__global__ void kernel(volatile bool *found, ...) 
{
    while (!(*found) && workLeftToDo()) {

       bool iFoundIt = do_some_work(...); // see notes below

       if (iFoundIt) *found = true;
    }
}

Some notes on this.

  1. Note the use of volatile. This is important.
  2. Make sure you initialize found, which must be a device pointer, to false before launching the kernel! A minimal host-side sketch of this (and of point 6's "maximal launch") follows this list.
  3. Threads will not exit instantly when another thread updates found. They will exit only the next time they return to the top of the while loop.
  4. How you implement do_some_work matters. If it is too much work (or too variable), then the delay to exit after a result is found will be long (or variable). If it is too little work, then your threads will be spending most of their time checking found rather than doing useful work.
  5. do_some_work is also responsible for allocating tasks (i.e. computing/incrementing indices), and how you do that is problem specific.
  6. If the number of blocks you launch is much larger than the maximum occupancy of the kernel on the present GPU, and a match is not found in the first running "wave" of thread blocks, then this kernel (and the one below) can deadlock. If a match is found in the first wave, then later blocks will only run after found == true, which means they will launch, then exit immediately. The solution is to launch only as many blocks as can be resident simultaneously (aka "maximal launch"), and update your task allocation accordingly.
  7. If the number of tasks is relatively small, you can replace the while with an if and run just enough threads to cover the number of tasks. Then there is no chance for deadlock (but the first part of the previous point applies).
  8. workLeftToDo() is problem-specific, but it would return false when there is no work left to do, so that we don't deadlock in the case that no match is found.
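
Putting points 2 and 6 together, the host side might look something like the following sketch (also written in the browser, so treat it as a starting point). The kernel's extra arguments are elided, and the grid sizing uses cudaOccupancyMaxActiveBlocksPerMultiprocessor, which is available in newer CUDA toolkits and is just one way to approximate a "maximal launch".

#include <cuda_runtime.h>
#include <stdio.h>

// Assumes the kernel(volatile bool *found, ...) defined above is in this file.

int main()
{
    bool *d_found;
    bool h_found = false;

    // Point 2: the flag lives in device memory and must start out false.
    cudaMalloc((void**)&d_found, sizeof(bool));
    cudaMemset(d_found, 0, sizeof(bool));

    // Point 6: launch only as many blocks as can be resident at once.
    int blockSize = 256, blocksPerSM = 0, numSMs = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, kernel, blockSize, 0);
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, 0);

    kernel<<<blocksPerSM * numSMs, blockSize>>>(d_found /*, ... */);

    // Copy the flag back to learn whether any thread found a match.
    cudaMemcpy(&h_found, d_found, sizeof(bool), cudaMemcpyDeviceToHost);
    printf("%s\n", h_found ? "he did it!" : "no match found");

    cudaFree(d_found);
    return 0;
}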

Now, the first kernel above may result in excessive partition camping (all threads banging on the same memory), especially on older architectures without an L1 cache. So you might want to write a slightly more complicated version, using a shared status flag per block.

__global__ void kernel(volatile bool *found, ...) 
{
    volatile __shared__ bool someoneFoundIt;

    // initialize shared status
    if (threadIdx.x == 0) someoneFoundIt = *found;
    __syncthreads();

    while(!someoneFoundIt && workLeftToDo()) {

       bool iFoundIt = do_some_work(...); 

       // if I found it, tell everyone they can exit
       if (iFoundIt) { someoneFoundIt = true; *found = true; }

       // if someone in another block found it, tell 
       // everyone in my block they can exit
       if (threadIdx.x == 0 && *found) someoneFoundIt = true;

       __syncthreads();
    }
}

This way, one thread per block polls the global variable, and only threads that find a match ever write to it, so global memory traffic is minimized.

Aside: __global__ functions are void because it's difficult to define how to return values from thousands of threads into a single CPU thread. It is trivial for the user to contrive a return array in device or zero-copy memory that suits their purpose, but difficult to make a generic mechanism.
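
For instance, a zero-copy "return value" might be contrived like this (again only a sketch; the tiny writeResult kernel and the names are purely illustrative):

#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative kernel: whichever thread(s) you choose write the "return value".
__global__ void writeResult(int *result)
{
    if (blockIdx.x == 0 && threadIdx.x == 0)
        *result = 42;
}

int main()
{
    int *h_result, *d_result;

    cudaSetDeviceFlags(cudaDeviceMapHost);          // enable mapped host memory
    cudaHostAlloc((void**)&h_result, sizeof(int), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void**)&d_result, h_result, 0);

    *h_result = 0;
    writeResult<<<1, 32>>>(d_result);
    cudaDeviceSynchronize();                        // make the write visible to the host

    printf("kernel 'returned' %d\n", *h_result);
    cudaFreeHost(h_result);
    return 0;
}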

Disclaimer: Code written in browser, untested, unverified.

Answered Oct 23 '22 by harrism