GPU Shared Memory Bank Conflict

I am trying to understand how bank conflicts take place.
If I have an array of size 256 in global memory and 256 threads in a single block, and I want to copy the array to shared memory, then every thread copies one element:

shared_a[threadIdx.x] = global_a[threadIdx.x];

Does this simple action result in a bank conflict?

Suppose now that the size of the array is larger than the number of threads, so I am now using this to copy from global memory to shared memory:

int tid = threadIdx.x;
for (int i = 0; tid + i < N; i += blockDim.x)
    shared_a[tid + i] = global_a[tid + i];

Does the above code result in a bank conflict?

asked Dec 09 '10 by scatman


People also ask

What is a bank conflict on a GPU?

The shared memory that can be accessed in parallel is divided into modules (also called banks). If two memory locations (addresses) occur in the same bank, then you get a bank conflict during which the access is done serially, losing the advantages of parallel access.

Why is shared memory divided into 32 banks?

To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously.

What is memory hierarchy and bank conflict in CUDA?

Shared memory only exists for the lifetime of the block. Bank conflicts can slow access down. It's fastest when all threads read from different banks or all threads of a warp read exactly the same value. Bank conflicts are only possible within a warp. No bank conflicts occur between different warps.

What is a GPU memory bank?

The shared memory is divided into multiple banks (similar to banks in DRAM modules). Each bank can service only one request at a time. The shared memory is therefore interleaved to increase the throughput. If the shared memory is interleaved by 32 bits, then the bandwidth of each bank is 32 bits or one float data type.
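As a rough illustration of that interleaving (this helper is my own sketch, not something from the answers below), the bank a shared-memory address falls into is simply its 32-bit word index modulo the number of banks:

    // Illustrative only: assumes 4-byte-wide banks (32 banks on compute
    // capability >= 2.x, 16 banks on the old 1.x hardware discussed below).
    __host__ __device__ inline unsigned bank_of(unsigned byte_offset,
                                                unsigned num_banks)
    {
        unsigned word_index = byte_offset / 4;  // which 32-bit word
        return word_index % num_banks;          // which bank serves that word
    }

For a 4-byte array element at index i this is just i % num_banks, which is the mental model both answers below use.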


2 Answers

The best way to check this would be to profile your code using the "Compute Visual Profiler"; this comes with the CUDA Toolkit. Also there's a great section in GPU Gems 3 on this - "39.2.3 Avoiding Bank Conflicts".

"When multiple threads in the same warp access the same bank, a bank conflict occurs unless all threads of the warp access the same address within the same 32-bit word" - First thing there are 16 memory banks each 4bytes wide. So essentially, if you have any thread in a half warp reading memory from the same 4bytes in a shared memory bank, you're going to have bank conflicts and serialization etc.

OK so your first example:

First, let's assume your arrays are, say, of type int (a 32-bit word). Your code saves these ints into shared memory, and across any half warp the Kth thread saves to the Kth memory bank. For example, thread 0 of the first half warp saves to shared_a[0], which is in the first memory bank, thread 1 saves to shared_a[1], and so on; each half warp has 16 threads, and these map onto the 16 4-byte banks. In the next half warp, the first thread saves its value into shared_a[16], which is in the first memory bank again. So if you use a 4-byte word such as int or float, your first example will not result in a bank conflict. If you use a 1-byte word such as char, then in the first half warp threads 0, 1, 2 and 3 will all save their values to the first bank of shared memory, which will cause a bank conflict.
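To make that concrete, here is a minimal sketch of the two cases (my own code, not the asker's; it assumes a single block of 256 threads and the 16-bank, 4-byte-wide layout of cc 1.x hardware):

    __global__ void copy_int(const int* global_a, int* out)
    {
        __shared__ int shared_a[256];
        // 4-byte elements: in each half warp, thread k writes word k,
        // i.e. bank k % 16, so 16 threads hit 16 different banks -> no conflict.
        shared_a[threadIdx.x] = global_a[threadIdx.x];
        __syncthreads();
        out[threadIdx.x] = shared_a[threadIdx.x];
    }

    __global__ void copy_char(const char* global_a, char* out)
    {
        __shared__ char shared_a[256];
        // 1-byte elements: threads 0-3 of a half warp all write bytes of the
        // same 32-bit word (the same bank) -> a 4-way conflict on cc 1.x.
        // On cc >= 2.x this case is handled without a conflict; see the
        // second answer below.
        shared_a[threadIdx.x] = global_a[threadIdx.x];
        __syncthreads();
        out[threadIdx.x] = shared_a[threadIdx.x];
    }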

Second example:

Again, this all depends on the size of the word you are using, but for this example I'll use a 4-byte word. So, looking at the first warp:

Number of threads = 32

N = 64

Thread 0 will write to indices 0 and 32; thread 1 will write to indices 1 and 33; and so on.

All threads of the warp execute each loop iteration concurrently, and within one iteration they write to consecutive indices, which land in consecutive banks, so the writes to shared memory shouldn't cause bank conflicts. I'll have to double check this one though.
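For reference, here is the asker's loop written out as a complete kernel (my reconstruction; N = 64 and a 32-thread block are assumed to match the numbers above), with the per-iteration indices noted in comments:

    #define N 64

    __global__ void copy_strided(const int* global_a, int* out)
    {
        __shared__ int shared_a[N];
        int tid = threadIdx.x;

        // Iteration i = 0:  thread 0 -> index 0,  thread 1 -> index 1,  ...
        // Iteration i = 32: thread 0 -> index 32, thread 1 -> index 33, ...
        // Within one iteration consecutive threads hit consecutive banks,
        // and different iterations are separate accesses, so no conflicts.
        for (int i = 0; tid + i < N; i += blockDim.x)
            shared_a[tid + i] = global_a[tid + i];

        __syncthreads();
        for (int i = 0; tid + i < N; i += blockDim.x)
            out[tid + i] = shared_a[tid + i];
    }

Launched, for example, as copy_strided<<<1, 32>>>(d_in, d_out) to match the thread count used above.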

Hope this helps, sorry for the huge reply!

answered by Ljdawson


In both cases the threads access shared memory with consecutive addresses. The outcome depends on the element size, but consecutive access to shared memory by a warp of threads does not result in a bank conflict for "small" element sizes.

Profiling this code with the NVIDIA Visual Profiler shows that for element sizes smaller than 32 bytes that are a multiple of 4 (4, 8, 12, ..., 28), consecutive access to shared memory does not result in a bank conflict. An element size of 32 bytes, however, does result in bank conflicts.
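The answer doesn't show its test code, but a kernel along these lines (my own sketch; the struct and kernel names are made up) is enough to reproduce the observation: vary BYTES and watch the shared-memory bank-conflict counter in the profiler.

    template <int BYTES>
    struct Elem { char pad[BYTES]; };  // one element of BYTES bytes

    template <int BYTES>
    __global__ void consecutive_copy(const Elem<BYTES>* g, Elem<BYTES>* out, int n)
    {
        extern __shared__ char smem_raw[];
        Elem<BYTES>* s = reinterpret_cast<Elem<BYTES>*>(smem_raw);

        int tid = threadIdx.x;
        if (tid < n) {
            // Consecutive access: thread tid touches bytes
            // [tid * BYTES, (tid + 1) * BYTES) of shared memory.
            s[tid] = g[tid];
            out[tid] = s[tid];
        }
    }

Launched, for example, as consecutive_copy<32><<<1, 32, 32 * sizeof(Elem<32>)>>>(d_in, d_out, 32); the 32-byte case is the one the profiler flags.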


The answer by Ljdawson contains some outdated information:

... If you use a 1-byte word such as char, then in the first half warp threads 0, 1, 2 and 3 will all save their values to the first bank of shared memory, which will cause a bank conflict.

This may be true for old GPUs, but on more recent GPUs (cc >= 2.x) such accesses don't cause bank conflicts, thanks to the broadcast mechanism (link). The following quote is from the CUDA C Programming Guide (v8.0.61), G.3.3. Shared Memory.

A shared memory request for a warp does not generate a bank conflict between two threads that access any address within the same 32-bit word (even though the two addresses fall in the same bank): In that case, for read accesses, the word is broadcast to the requesting threads (multiple words can be broadcast in a single transaction) and for write accesses, each address is written by only one of the threads (which thread performs the write is undefined).

This means, in particular, that there are no bank conflicts if an array of char is accessed as follows, for example:

   extern __shared__ char shared[];
   char data = shared[BaseIndex + tid];
answered by nglee