
Threads and Thread Groups on the GPU

I'm wondering about the "grids" of threads/thread groups I can dispatch on the GPU. I'm using DirectCompute, so I'll give a concrete example using that API. If I call Dispatch(2,2,2), I understand it dispatches 2x2x2 = 8 thread groups in total. What's the difference if I instead call Dispatch(8,1,1), which also dispatches 8x1x1 = 8 thread groups? Is there any performance difference?

P.S. The same question applies to threads: what's the difference between numthreads(2,2,2) and numthreads(8,1,1), declared in the compute shader (.hlsl) file?

Any help would be appreciated.

asked Sep 22 '12 by l3utterfly

2 Answers

From a pure performance perspective there really isn't a difference per se: the ability to define the dimensions of the grid of thread groups/blocks exists mainly so you can map the workload naturally onto the abstraction of the problem itself, not for speed. In other words, if your problem maps well to a 3D volumetric grid, you could create the same number of thread groups with a 1D dispatch and convert the 3D coordinates to a linear representation yourself, but that mapping can be cumbersome to deal with. Furthermore, if the mapping is too complex, the extra index arithmetic can create a small performance hit.
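As an illustrative sketch of that cumbersome mapping (the group size, entry-point names, and the commented-out buffer write here are made up for the example), here is what covering the same 2x2x2 grid of groups looks like with a 3D dispatch versus a 1D dispatch:

```hlsl
// Case 1: CPU calls Dispatch(2, 2, 2).
// SV_DispatchThreadID already arrives as a natural 3D coordinate.
[numthreads(8, 8, 8)]
void CS3D(uint3 tid : SV_DispatchThreadID)
{
    // tid indexes the 3D volume directly.
    // Volume[tid] = ...;
}

// Case 2: CPU calls Dispatch(8, 1, 1) to cover the same 8 groups.
// The 3D group coordinate must be reconstructed by hand.
[numthreads(8, 8, 8)]
void CS1D(uint3 groupID : SV_GroupID, uint3 gtid : SV_GroupThreadID)
{
    uint flat = groupID.x;      // 0..7
    uint3 g;
    g.x = flat % 2;             // recover the 3D group coordinate
    g.y = (flat / 2) % 2;       // from the linear group index
    g.z = flat / 4;
    uint3 tid = g * uint3(8, 8, 8) + gtid;
    // Volume[tid] = ...;
}
```

Both shaders touch the same set of thread IDs; the second just pays for the modulo/divide arithmetic and is easier to get wrong.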

The number of thread groups/blocks you create, and the number of threads in each, is important though. On an Nvidia GPU, each thread group is assigned to a streaming multiprocessor (SM/SMX), and mapping multiple thread groups and their associated threads to each SM is necessary for hiding latency due to memory accesses, etc. Additionally, you want enough threads in a thread group/block to take advantage of the SIMT (single instruction, multiple threads) capabilities of the GPU. This means that for each clock cycle (or set of clock cycles) an SM can execute X threads at the same time in lock-step; this number is the "warp" size. You want enough threads in the block to fill the warp, otherwise lanes of the GPU's streaming processors sit idle while the block is running. On Nvidia GPUs the warp size is 32 threads. In CUDA you can query this for the GPU you're using, although I'm assuming DirectCompute abstracts it away. ATI cards have an equivalent width for their streaming processors as well: 64 threads per "wavefront".
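Applying that to the numthreads part of the question (assuming a 32-wide warp, as on Nvidia hardware; entry-point names are made up), both of the asker's layouts are warp-underfilled in the same way:

```hlsl
// numthreads(2,2,2) = 8 threads per group. Thread groups do not
// share warps, so on 32-wide-warp hardware each group leaves
// 24 of a warp's 32 lanes idle.
[numthreads(2, 2, 2)]
void Underfilled3D(uint3 tid : SV_DispatchThreadID) { /* ... */ }

// numthreads(8,1,1) is also 8 threads per group: same problem,
// just a different shape.
[numthreads(8, 1, 1)]
void Underfilled1D(uint3 tid : SV_DispatchThreadID) { /* ... */ }

// A group size that is a multiple of the warp size keeps all
// lanes busy (64 also divides AMD's 64-wide wavefront).
[numthreads(64, 1, 1)]
void WarpAligned(uint3 tid : SV_DispatchThreadID) { /* ... */ }
```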

Ideally, then, you want enough threads per block to fill the warp or wavefront size of the GPU, and then many blocks mapped to each streaming multiprocessor, so that blocks can be kept in flight and swapped in whenever a running block hits a high-latency operation. This maximizes the compute throughput of the GPU.

answered Oct 04 '22 by Jason


A block can arrange its threads in up to three dimensions.

Let's go with an example. Suppose you want 32 threads per group. Those 32 threads can be arranged in up to three dimensions. Imagine an axis system with X, Y and Z axes. You can place all 32 threads along the X axis only, i.e. (32,1,1). You can arrange them along X and Y together, like a 2D matrix: (8,4,1), i.e. 8 columns and 4 rows. Or you can arrange them in three dimensions: (8,2,2), i.e. 8 columns, 2 rows, and a depth of 2 (imagine a cuboid 8 wide, 2 tall, 2 deep). Try to picture each layout in your mind.
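A sketch of those three layouts as compute-shader declarations (kernel names are made up; the flattened index in each comment just shows that all three cover the same 32 threads):

```hlsl
// 32 threads in a line along X.
[numthreads(32, 1, 1)]
void LineKernel(uint3 t : SV_GroupThreadID)
{
    uint flat = t.x;                        // 0..31
}

// 32 threads as an 8 x 4 sheet (8 columns, 4 rows).
[numthreads(8, 4, 1)]
void SheetKernel(uint3 t : SV_GroupThreadID)
{
    uint flat = t.y * 8 + t.x;              // 0..31
}

// 32 threads as an 8 x 2 x 2 cuboid.
[numthreads(8, 2, 2)]
void CuboidKernel(uint3 t : SV_GroupThreadID)
{
    uint flat = (t.z * 2 + t.y) * 8 + t.x;  // 0..31
}
```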

answered Oct 04 '22 by sandeep.ganage