
CUDA - Coalescing memory accesses and bus width

So my understanding of coalescing memory accesses in CUDA is that the threads in a warp should access contiguous memory addresses, as that causes only a single memory transaction (with the value at each address delivered to the corresponding thread) instead of multiple transactions performed serially.

Now, my bus width is 48 bytes. This means I can transfer 48 bytes in each memory transaction, right? So, in order to take full advantage of the bus, I would need to be able to read 48 bytes at a time (by reading more than one byte per thread - memory transactions are executed by a warp). However, hypothetically, wouldn't a single thread reading 48 bytes at a time provide the same advantage (assuming it can read 48 bytes at once by reading a structure whose size is 48 bytes)?
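
For concreteness, a minimal sketch of the two access patterns I'm comparing (the kernel and struct names are made up for illustration):

    // Pattern A: each of the 32 threads in a warp reads one consecutive float,
    // so the warp's loads coalesce into a single wide transaction.
    __global__ void readCoalesced(float *out, const float *in, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];
    }

    // Pattern B: a single thread reads a whole 48-byte structure by itself
    // (this is the hypothetical case I'm asking about).
    struct Chunk48 { float v[12]; };   // 12 * 4 bytes = 48 bytes

    __global__ void readPerThreadStruct(Chunk48 *out, const Chunk48 *in, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];
    }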

My problem with coalescing is the transposing that I have to do on the data. I have lots of data, so transposing it takes time that I would rather use for something else if I could.

I'm on Compute Capability 2.0.

Asked Sep 25 '12 by Alexandre Dias


People also ask

What is memory coalescing in Cuda?

Memory coalescing is a technique that allows optimal use of the global memory bandwidth: when parallel threads running the same instruction access consecutive locations in global memory, the most favorable access pattern is achieved.

What is global memory in CUDA?

The global memory is the total amount of DRAM of the GPU you are using, e.g. a GTX 460M has 1536 MB of DRAM and therefore 1536 MB of global memory. Shared memory is specified by the device architecture and is measured on a per-block basis.

What is coalesced access?

Coalesced memory access, or memory coalescing, refers to combining multiple memory accesses into a single transaction. On the K20 GPUs on Stampede, every successive 128 bytes (32 single-precision words) of memory can be accessed by a warp (32 consecutive threads) in a single transaction.


2 Answers

The memory bus of your GPU isn't simply 48 bytes wide (which would be quite cumbersome as it is not a power of 2). Instead, it is composed of 6 memory channels of 8 bytes (64 bits) each. Memory transactions are usually much wider than the channel width, in order to take advantage of the memory's burst mode. Good transaction sizes start from 64 bytes to produce a size-8 burst, which matches nicely with 16 32-bit words of a half-warp on compute capability 1.x devices.

128 byte wide transactions are still a bit faster, and match the warp-wide 32-bit word accesses of compute capability 2.0 (and higher) devices. Cache lines are also 128 bytes wide to match. Note that all of these accesses must be aligned on a multiple of the transaction width in order to map to a single memory transaction.

Now regarding your actual problem, the best thing probably is to do nothing and let the cache sort it out. This works the same way as you would do explicitly with shared memory, except that it is done for you by the cache hardware and no code is needed for it, which should make it slightly faster. The only thing to worry about is having enough cache available so that each warp can get the necessary 32×32×4 bytes = 4 kB of cache for word-wide (e.g. float) accesses, or 8 kB for double accesses. This means it can be beneficial to limit the number of warps that are active at the same time to prevent them from thrashing each other's cache lines.
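
A minimal sketch of what "do nothing" looks like (kernel and parameter names are assumed): the loads are coalesced along rows, and the strided stores are left to the cache to combine.

    // Naive transpose relying on the cache: reads from "in" are coalesced
    // within each warp, while the strided writes to "out" are left to L1/L2.
    __global__ void transposeNaive(float *out, const float *in,
                                   int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;   // column of "in"
        int y = blockIdx.y * blockDim.y + threadIdx.y;   // row of "in"
        if (x < width && y < height)
            out[x * height + y] = in[y * width + x];     // transposed store
    }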

For special optimizations there is also the possibility of using vector types like float2 or float4, as all CUDA-capable GPUs have load and store instructions that map 8 or 16 bytes into the same thread. However, on compute capability 2.0 and higher I don't really see any advantage to using them in the general matrix transpose case, as they increase the cache footprint of each warp even more.
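
For illustration, a sketch of a float4-based copy (names are assumed; n4 is the element count divided by 4, and the pointers must be 16-byte aligned, which cudaMalloc guarantees):

    // Each thread moves 16 bytes with a single vector load and store.
    __global__ void copyFloat4(float4 *out, const float4 *in, int n4)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n4)
            out[i] = in[i];
    }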

As the default setting of 16 kB cache / 48 kB shared memory allows only four warps per SM to perform the transpose at any one time (provided you have no other memory accesses at the same time), it is probably beneficial to select the 48 kB cache / 16 kB shared memory split over the default 16 kB / 48 kB split using cudaDeviceSetCacheConfig(). Newer devices have larger caches and offer more split options, as well as the possibility of opting in to more than 48 kB of shared memory. The details can be found in the linked documentation.
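
The call itself is a one-liner; a sketch (the transpose kernel name refers to the naive sketch above, and the setting is only a preference that the runtime may override):

    // Prefer 48 kB L1 cache / 16 kB shared memory for subsequent launches.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);
    // Or per kernel:
    // cudaFuncSetCacheConfig(transposeNaive, cudaFuncCachePreferL1);
    transposeNaive<<<grid, block>>>(d_out, d_in, width, height);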

For completeness, I'll also mention that the warp shuffle instructions introduced with compute capability 3.0 allow threads to exchange register data within a warp without going through the cache or through shared memory. See Appendix B.22 of the CUDA C Programming Guide for details.
(Note that some versions of the Programming Guide exist without this appendix, so if that appendix in your copy is about something else, reload it through the link provided.)
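
As a generic illustration of shuffles (not the transpose itself), here is a warp-wide sum; note that with CUDA 9 and later the intrinsics take an explicit lane mask (__shfl_xor_sync), while older toolkits used __shfl_xor without it:

    // Butterfly reduction: after the loop every lane holds the warp-wide sum,
    // with no shared memory or cache traffic involved.
    __device__ float warpSum(float v)
    {
        for (int offset = 16; offset > 0; offset >>= 1)
            v += __shfl_xor_sync(0xffffffff, v, offset);
        return v;
    }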

Answered by tera


For purposes of coalescing, as you stated, you should focus on making the 32 threads in a warp access contiguous locations, preferably 32-byte or 128-byte aligned as well. Beyond that, don't worry about the physical address bus to the DRAM memory. The memory controller is composed of mostly independent partitions that are each 64 bits wide. Your coalesced access coming out of the warp will be satisfied as quickly as possible by the memory controller. A single coalesced access for a full warp (32 threads) accessing an int or float requires 128 bytes to be retrieved anyway, i.e. multiple transactions on the physical bus to DRAM. When you are operating in caching mode, you can't really control the granularity of requests to global memory below 128 bytes at a time anyway.

It's not possible to cause a single thread to request 48 bytes or anything like that in a single transaction. Even if at the C code level you think you are accessing a whole data structure at once, at the machine code level it gets converted to instructions that read at most 128 bits (16 bytes) at a time.
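
As a rough illustration (hypothetical struct name, and the exact instruction mix depends on the compiler), copying a 48-byte structure per thread is emitted as several separate loads and stores rather than one 48-byte transaction:

    struct __align__(16) Blob48 { float4 a, b, c; };   // 48 bytes per element

    __global__ void copyBlob(Blob48 *out, const Blob48 *in, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];   // compiled into three 16-byte loads and stores,
                              // not a single 48-byte access
    }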

If you feel that the caching restriction of 128 bytes at a time is penalizing your code, you can try running in uncached mode, which will reduce the granularity of global memory requests to 32 bytes at a time. If you have a scattered access pattern (not well coalesced) this option may give better performance.

Answered by Robert Crovella