
Performance of atomic operations on shared memory

Tags: cuda, gpgpu

How do atomic operations perform when the address they are given resides in block shared memory? During an atomic operation, does it pause accesses to the same shared memory bank by other threads inside the block, stop other threads from executing any instructions, or even stall threads across all blocks until the atomic operation is done?

asked Oct 21 '13 by Farzad


1 Answer

UPDATE: Since Maxwell (the generation after Kepler), NVIDIA has included hardware support for atomic operations in shared memory. Contention (i.e. if multiple threads are trying to operate on the same shared memory location) will tend to degrade performance, not unlike the looping that software must perform if there's contention on the pre-Maxwell locks.
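For concreteness, here is a minimal sketch of the kind of pattern these shared-memory atomics serve: a per-block histogram built with atomicAdd. The kernel name and bin count are my own assumptions, not part of the original answer. On Maxwell and later the hardware handles the atomics directly, but throughput still drops when many threads of a warp land in the same bin.

    // A sketch, not from the original answer: per-block histogram using
    // atomicAdd on shared memory. NUM_BINS and the kernel name are assumed.
    #define NUM_BINS 64

    __global__ void blockHistogram(const unsigned char *data, int n,
                                   unsigned int *globalHist)
    {
        __shared__ unsigned int hist[NUM_BINS];

        // Zero the per-block histogram.
        for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
            hist[i] = 0;
        __syncthreads();

        // Each thread atomically increments the bin of its elements.
        // If the input is skewed so that many threads hit the same bin,
        // contention serializes the updates and throughput drops.
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
             i += gridDim.x * blockDim.x)
            atomicAdd(&hist[data[i] % NUM_BINS], 1u);
        __syncthreads();

        // Fold the block-private histogram into the global one.
        for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
            atomicAdd(&globalHist[i], hist[i]);
    }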

Pre-Maxwell:

The shared memory hardware includes 1024 locks. If you call an atomic intrinsic that operates on shared memory, the compiler emits a short loop: each thread tries to acquire the lock covering the target address, performs the operation and releases the lock if the acquisition succeeded, and retries otherwise. As a result, performance can be extremely data-dependent: if all 32 threads in a warp try to acquire different locks, they all perform their atomic operation and release the locks without looping at all. On the other hand, if all 32 threads in a warp try to acquire the same lock, the warp loops 31 times as each thread in turn performs its atomic operation and releases the lock that all of the other threads are waiting on.

The lock acquired is determined by bits 2-11 of the shared memory address. So as with most memory operations in CUDA, operating on consecutive 32-bit addresses usually gives good performance.
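As a rough illustration of both extremes, consider the hypothetical kernel below (written for this explanation, not part of the original answer): with spread == 0 every lane of the warp updates the same 32-bit word and therefore contends for the same lock on pre-Maxwell parts, while with spread == 1 each lane updates its own consecutive word, so the locks selected by bits 2-11 of the addresses are all different.

    // Hypothetical kernel contrasting the two access patterns discussed above.
    __global__ void contentionDemo(int *out, int spread)
    {
        __shared__ int counters[32];
        if (threadIdx.x < 32)
            counters[threadIdx.x] = 0;
        __syncthreads();

        // spread == 0: all lanes hit counters[0] -> one lock, so the warp
        //   loops ~31 times on pre-Maxwell hardware (serialized on Maxwell+).
        // spread == 1: lane i hits counters[i] -> consecutive 32-bit words,
        //   different locks, no looping.
        int lane = threadIdx.x % 32;
        atomicAdd(&counters[spread ? lane : 0], 1);

        __syncthreads();
        if (threadIdx.x < 32)
            out[blockIdx.x * 32 + threadIdx.x] = counters[threadIdx.x];
    }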

answered Oct 26 '22 by ArchaeaSoftware