
Does CUDA automatically load-balance for you?

I'm hoping for some general advice and clarification on best practices for load balancing in CUDA C, in particular:

  • If 1 thread in a warp takes longer than the other 31, will it hold up the other 31 from completing?
  • If so, will the spare processing capacity be assigned to another warp?
  • Why do we need the notion of warp and block? Seems to me a warp is just a small block of 32 threads.
  • So in general, for a given call to a kernel, what do I need to load-balance?
    • Threads in each warp?
    • Threads in each block?
    • Threads across all blocks?

Finally, to give an example, what load balancing techniques would you use for the following function:

  1. I have a vector x0 of N points: [1, 2, 3, ..., N]
  2. I randomly select 5% of the points and log them (or some complicated function)
  3. I write the resulting vector x1 (e.g. [1, log(2), 3, 4, 5, ..., N]) to memory
  4. I repeat the above 2 operations on x1 to yield x2 (e.g. [1, log(log(2)), 3, 4, log(5), ..., N]), and then do a further 8 iterations to yield x3 ... x10
  5. I return x10

Many thanks.

asked Jan 02 '13 by mchen

1 Answer

Threads are grouped into three levels that are scheduled differently. Warps utilize SIMD for higher compute density. Thread blocks utilize multithreading for latency tolerance. Grids provide independent, coarse-grained units of work for load balancing across SMs.

Threads in a warp

The hardware executes the 32 threads of a warp together. It can execute 32 instances of a single instruction with different data. If the threads take different control flow, so they are not all executing the same instruction, then some of those 32 execution resources will be idle while the instruction executes. This is called control divergence in CUDA references.
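For illustration, here is a minimal sketch of a divergent kernel (both device functions are hypothetical stand-ins for expensive work). Within each warp, even and odd lanes take different branches, so the hardware executes the two paths one after the other, with half the lanes idle in each:

```cuda
// Hypothetical device functions standing in for expensive work
__device__ float work_a(int i) { return sinf((float)i); }
__device__ float work_b(int i) { return cosf((float)i); }

__global__ void divergent_kernel(float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i % 2 == 0)
        out[i] = work_a(i);  // half the warp idles during this branch...
    else
        out[i] = work_b(i);  // ...and the other half during this one
}
```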

If a kernel exhibits a lot of control divergence, it may be worth redistributing work at this level so that all execution resources within a warp stay busy. You can reassign work between threads as shown below.

__shared__ int tmp_counter;
__shared__ int tmp[BLOCK_SIZE]; // BLOCK_SIZE: threads per block

// One thread resets the counter for this pass
if (threadIdx.x == 0) tmp_counter = 0;
__syncthreads();

// Identify which data should be processed
if (should_do_work(threadIdx.x)) {
  int tmp_index = atomicAdd(&tmp_counter, 1);
  tmp[tmp_index] = threadIdx.x;
}
__syncthreads();

// Assign that work to the first threads in the block
if (threadIdx.x < tmp_counter) {
  int thread_index = tmp[threadIdx.x];
  do_work(thread_index); // Thread threadIdx.x does work on behalf of thread tmp[threadIdx.x]
}

Warps in a block

On an SM, the hardware schedules warps onto execution units. Some instructions take a while to complete, so the scheduler interleaves the execution of multiple warps to keep the execution units busy. If some warps are not ready to execute, they are skipped with no performance penalty.

There is usually no need for load balancing at this level. Simply ensure that enough warps are available per thread block so that the scheduler can always find a warp that is ready to execute.
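For example, a block size of several warps (say 128 or 256 threads) gives the scheduler warps to interleave. A hypothetical host-side helper to keep a block size warp-aligned, so no warp launches partially populated, might look like:

```c
#define WARP_SIZE 32

/* Round a requested block size up to a whole number of warps. */
int round_to_warps(int requested)
{
    return ((requested + WARP_SIZE - 1) / WARP_SIZE) * WARP_SIZE;
}
```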

Blocks in a grid

The runtime system schedules blocks onto SMs. Several blocks can run concurrently on an SM.

There is usually no need for load balancing at this level. Simply ensure that enough thread blocks are available to fill all SMs several times over. It is useful to overprovision thread blocks to minimize the load imbalance at the end of a kernel, when some SMs are idle and no more thread blocks are ready to execute.
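The usual host-side sizing, sketched here as a hypothetical helper, launches one thread per element and rounds the block count up so every element is covered:

```c
/* Number of blocks needed to cover n elements with block_size threads
   per block, rounding up so no element is missed. */
int grid_blocks(int n, int block_size)
{
    return (n + block_size - 1) / block_size;
}
```

With n = 1,000,000 and 256-thread blocks this yields 3907 blocks, far more than the number of SMs on any GPU, which is the overprovisioning described above.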

answered Sep 22 '22 by Heatsink