
Minimum number of GPU threads to be effective

Tags:

cuda

gpu

I'm going to parallelize, on CUDA, a local search algorithm for an optimization problem. The problem is very hard, so the size of the practically solvable instances is quite small. My concern is that the number of threads planned to run in one kernel is insufficient to obtain any speedup on the GPU (even assuming all threads are coalesced, free of bank conflicts, non-branching, etc.). Say a kernel is launched with 100 threads. Is it reasonable to expect any benefit from using the GPU? What if the number of threads is 1000? What additional information is needed to analyze the case?

AdelNick asked Aug 11 '11


People also ask

How many threads can a GPU handle?

There are 4 to 10 threads per core on a GPU. GPUs follow data parallelism, applying the same operation to multiple data items (single instruction, multiple data, or SIMD). GPU cards are primarily designed for fine-grained, data-parallel computation.

How many threads should I use in CUDA?

It depends on factors such as the number of active blocks per Streaming Multiprocessor. However, according to the CUDA manuals, 128 or 256 threads per block is a good choice if you do not want to worry about the deeper details of GPGPU tuning.

How many threads can be executed at a time in CUDA?

There are 32 threads per warp. That is constant across all CUDA cards as of now.

How many threads is a warp?

A warp is a set of 32 threads within a thread block such that all the threads in a warp execute the same instruction. These threads are selected serially by the SM. Once a thread block is launched on a multiprocessor (SM), all of its warps are resident until their execution finishes.


1 Answer

100 threads is not really enough. Ideally you want a size that can be divided into at least as many thread blocks as there are multiprocessors (SMs) on the GPU, otherwise you will be leaving processors idle. Each thread block should have no fewer than 32 threads, for the same reason. Ideally, you should have a small multiple of 32 threads per block (say 96-512 threads), and if possible, multiple such blocks per SM.
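As a sketch, a launch configuration following this sizing advice might look like the following. The kernel name, problem size, and the choice of 256 threads per block are illustrative assumptions, not part of the original answer:

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: one local-search step per element, with a bounds
// check so extra blocks launched just to occupy SMs do no harm.
__global__ void localSearchStep(float* state, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        state[i] += 1.0f;  // placeholder for real local-search work
    }
}

void launch(float* d_state, int n) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    int blockSize = 256;                                  // small multiple of 32
    int gridSize = (n + blockSize - 1) / blockSize;       // cover all elements
    if (gridSize < prop.multiProcessorCount)              // don't leave SMs idle
        gridSize = prop.multiProcessorCount;

    localSearchStep<<<gridSize, blockSize>>>(d_state, n);
}
```

The bounds check inside the kernel is what makes it safe to round the grid up to the SM count even when the problem is small.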

At a minimum, you should try to have enough threads to cover the arithmetic latency of the SMs, which means that on a Compute Capability 2.0 GPU, you need about 10-16 warps (groups of 32 threads) per SM. They don't all need to come from the same thread block, though. So that means, for example, on a Tesla M2050 GPU with 14 SMs, you would need at least 4480 threads, divided into at least 14 blocks.
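The arithmetic above can be computed at runtime by querying the device; a minimal sketch, where `warpsPerSM = 10` is an assumed tuning constant (the lower end of the 10-16 warp rule of thumb), not something the runtime reports:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    const int warpsPerSM = 10;  // assumed: low end of the 10-16 warp guideline
    int minThreads = prop.multiProcessorCount * warpsPerSM * prop.warpSize;

    // On a Tesla M2050 (14 SMs, warp size 32): 14 * 10 * 32 = 4480 threads,
    // in at least 14 blocks, matching the figure in the answer.
    printf("%d SMs -> at least %d threads in at least %d blocks\n",
           prop.multiProcessorCount, minThreads, prop.multiProcessorCount);
    return 0;
}
```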

That said, fewer threads than this could also provide a speedup -- it depends on many factors. If the computation is bandwidth bound, for example, and you can keep the data in device memory, then you could get a speedup because GPU device memory bandwidth is higher than CPU memory bandwidth. Or, if it is compute bound, and there is a lot of instruction-level parallelism (independent instructions from the same thread), then you won't need as many threads to hide latency. This latter point is described very well in Vladimir Volkov's "Better performance at lower occupancy" talk from GTC 2010.
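To illustrate the instruction-level-parallelism point, here is a hedged sketch in the style Volkov describes: each thread produces several independent results, so the scheduler can overlap their latencies and fewer threads are needed overall. The kernel name and the factor of four are illustrative assumptions:

```cuda
// Each thread scales four independent elements. The four multiplies have
// no dependencies on each other, so they can be issued back to back
// without stalling, hiding arithmetic latency within a single thread.
__global__ void scale4(const float* in, float* out, int n, float a) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
    if (i + 3 < n) {
        float x0 = in[i]     * a;
        float x1 = in[i + 1] * a;
        float x2 = in[i + 2] * a;
        float x3 = in[i + 3] * a;
        out[i]     = x0;
        out[i + 1] = x1;
        out[i + 2] = x2;
        out[i + 3] = x3;
    }
}
```

With this layout a grid needs only a quarter as many threads to cover the same data, at the cost of handling the tail elements (indices where `i + 3 >= n`) separately.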

The main thing is to make sure you use all of the SMs: without doing so you aren't using all of the computation performance or bandwidth the GPU can provide.

harrism answered Nov 19 '22