In Dmitry Vyukov's excellent bounded MPMC queue, written in C++ (see http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue), he adds some padding variables. I presume this is to make the structure align to a cache line for performance.
I have some questions.
Would using __attribute__ ((aligned (64))) instead work? And why would padding before a buffer pointer help with performance? Isn't just the pointer loaded into the cache, so it's really only the size of a pointer that matters?
    static size_t const cacheline_size = 64;
    typedef char cacheline_pad_t [cacheline_size];

    cacheline_pad_t pad0_;
    cell_t* const buffer_;
    size_t const buffer_mask_;
    cacheline_pad_t pad1_;
    std::atomic<size_t> enqueue_pos_;
    cacheline_pad_t pad2_;
    std::atomic<size_t> dequeue_pos_;
    cacheline_pad_t pad3_;
Would this concept work under GCC for C code?
Cache lines are 32 bytes in size and are aligned to 32-byte offsets. Memory locations which are offset by multiples of 4K bytes compete for 4 L1 cache lines. Memory locations which are offset by multiples of 64K bytes compete for 4 L2 cache lines. The L1 has separate cache lines for instructions and data (Harvard architecture).
Cache line size is 64 bytes.
Each cache line/slot matches a memory block. That means each cache line contains 16 bytes. If the cache is 64 Kbytes, then 64 Kbytes / 16 = 4096 cache lines. To address these 4096 cache lines, we need 12 bits (2^12 = 4096).
The chunks of memory handled by the cache are called cache lines. The size of these chunks is called the cache line size. Common cache line sizes are 32, 64 and 128 bytes. A cache can only hold a limited number of lines, determined by the cache size.
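If you want the figure for your own machine rather than a hard-coded 64, it can be queried. A minimal sketch, assuming a C++17 compiler whose standard library provides the interference-size constant and a Linux/glibc system for the sysconf call:

    #include <iostream>
    #include <new>       // std::hardware_destructive_interference_size (C++17)
    #include <unistd.h>  // sysconf (POSIX; _SC_LEVEL1_DCACHE_LINESIZE is a glibc extension)

    int main() {
        // Compile-time hint: the minimum offset to avoid false sharing,
        // typically 64 on current x86-64 implementations.
        std::cout << "interference size: "
                  << std::hardware_destructive_interference_size << '\n';

        // Runtime value reported by the OS for the L1 data cache
        // (may report 0 on systems that don't expose it).
        std::cout << "L1 dcache line:    "
                  << sysconf(_SC_LEVEL1_DCACHE_LINESIZE) << '\n';
    }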
It's done this way so that different cores modifying different fields won't have to bounce the cache line containing both of them between their caches. In general, for a processor to access some data in memory, the entire cache line containing it must be in that processor's local cache. If it's modifying that data, that cache entry usually must be the only copy in any cache in the system (Exclusive mode in the MESI/MOESI-style cache coherence protocols). When separate cores try to modify different data that happens to live on the same cache line, and thus waste time moving that whole line back and forth, that's known as false sharing.
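A minimal sketch of that effect (the struct and function names are made up for illustration, and the 64-byte line size is an assumption): two threads each increment their own counter, and the only difference between the two layouts is whether the counters can share a cache line.

    #include <atomic>
    #include <cstddef>
    #include <thread>

    struct Unpadded {
        std::atomic<std::size_t> a{0};
        std::atomic<std::size_t> b{0};      // almost certainly on the same line as a
    };

    struct Padded {
        alignas(64) std::atomic<std::size_t> a{0};
        alignas(64) std::atomic<std::size_t> b{0};  // forced onto its own 64-byte line
    };

    template <typename Counters>
    void hammer(Counters& c) {
        std::thread t1([&] { for (int i = 0; i < 10000000; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
        std::thread t2([&] { for (int i = 0; i < 10000000; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
        t1.join();
        t2.join();
    }

    int main() {
        Unpadded u;  hammer(u);  // the single line holding a and b ping-pongs between cores
        Padded   p;  hammer(p);  // each core keeps its own line; typically several times faster
    }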
In the particular example you give, one core can be enqueueing an entry (reading buffer_ as shared and writing only enqueue_pos_ as exclusive) while another dequeues (buffer_ shared, dequeue_pos_ exclusive) without either core stalling on a cache line owned by the other.
The padding at the beginning means that buffer_ and buffer_mask_ end up on the same cache line, rather than split across two lines and thus requiring double the memory traffic to access.
I'm unsure whether the technique is entirely portable. The assumption is that each cacheline_pad_t will itself be aligned to a 64-byte (its size) cache line boundary, and hence whatever follows it will be on the next cache line. So far as I know, the C and C++ language standards only require this of whole structures, so that they can live in arrays nicely without violating alignment requirements of any of their members.
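One way to see what that assumption does and doesn't buy you is to check the offsets at compile time. A sketch against a stripped-down mirror of the members above (the name layout is made up, and cell_t is left incomplete since only the pointer matters here):

    #include <atomic>
    #include <cstddef>  // offsetof, std::size_t

    struct cell_t;  // only a pointer to it is needed for the layout

    static std::size_t const cacheline_size = 64;
    typedef char cacheline_pad_t[cacheline_size];

    struct layout {
        cacheline_pad_t               pad0_;
        cell_t*                       buffer_;
        std::size_t                   buffer_mask_;
        cacheline_pad_t               pad1_;
        std::atomic<std::size_t>      enqueue_pos_;
        cacheline_pad_t               pad2_;
        std::atomic<std::size_t>      dequeue_pos_;
        cacheline_pad_t               pad3_;
    };

    // Offsets within the struct: buffer_ and buffer_mask_ fall in the same
    // 64-byte-aligned slice, while the two cursors fall in different ones.
    static_assert(offsetof(layout, buffer_) / cacheline_size ==
                  offsetof(layout, buffer_mask_) / cacheline_size,
                  "buffer_ and buffer_mask_ share a slice");
    static_assert(offsetof(layout, enqueue_pos_) / cacheline_size !=
                  offsetof(layout, dequeue_pos_) / cacheline_size,
                  "the producer and consumer cursors do not");

    // Whether those slices are real cache lines depends on the object itself
    // starting on a 64-byte boundary, which is exactly the part the language
    // standards don't guarantee here.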
The attribute approach would be more compiler-specific, but might cut the size of this structure in half, since the padding would be limited to rounding each element up to a full cache line. That could be quite beneficial if one had a lot of these.
The same concept applies in C as well as C++.
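A sketch of that alternative, using the standard C++11 alignas on the members (the struct name queue_fields is made up, and 64 is still an assumption about the target); in C, GCC's __attribute__ ((aligned (64))) or C11 _Alignas on the same members produces the same layout:

    #include <atomic>
    #include <cstddef>

    struct cell_t;

    struct queue_fields {
        alignas(64) cell_t*                       buffer_;       // buffer_mask_ stays on
        std::size_t                               buffer_mask_;  // the same line as buffer_
        alignas(64) std::atomic<std::size_t>      enqueue_pos_;
        alignas(64) std::atomic<std::size_t>      dequeue_pos_;
    };

    // 192 bytes (three 64-byte lines) versus the 288 bytes the explicit pad
    // arrays above produce, and the whole struct is now 64-byte aligned, so a
    // suitably aligned allocation keeps the members on real cache lines.
    static_assert(sizeof(queue_fields) == 192, "three cache lines");
    static_assert(alignof(queue_fields) == 64, "cache-line-aligned as a whole");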