Write-combining buffers have been a feature of Intel CPUs going back to at least the Pentium 4 and probably before. The basic idea is that these cache-line sized buffers collect writes to the same cache line so they can be handled as a unit. As an example of their implications for software performance, if you don't write the full cache line, you may experience reduced performance.
For example, section "3.6.10 Write Combining" of the Intel 64 and IA-32 Architectures Optimization Reference Manual starts with the following description (emphasis added):
Write combining (WC) improves performance in two ways:
• On a write miss to the first-level cache, it allows multiple stores to the same cache line to occur before that cache line is read for ownership (RFO) from further out in the cache/memory hierarchy. Then the rest of line is read, and the bytes that have not been written are combined with the unmodified bytes in the returned line.
• Write combining allows multiple writes to be assembled and written further out in the cache hierarchy as a unit. This saves port and bus traffic. Saving traffic is particularly important for avoiding partial writes to uncached memory.
There are six write-combining buffers (on Pentium 4 and Intel Xeon processors with a CPUID signature of family encoding 15, model encoding 3; there are 8 write-combining buffers). Two of these buffers may be written out to higher cache levels and freed up for use on other write misses. Only four write-combining buffers are guaranteed to be available for simultaneous use. Write combining applies to memory type WC; it does not apply to memory type UC.
There are six write-combining buffers in each processor core in Intel Core Duo and Intel Core Solo processors. Processors based on Intel Core microarchitecture have eight write-combining buffers in each core. Starting with Intel microarchitecture code name Nehalem, there are 10 buffers available for write-combining.
Write combining buffers are used for stores of all memory types. They are particularly important for writes to uncached memory ...
My question is whether write combining applies to WB memory regions (that's the "normal" memory you are using 99.99% of the time in user programs), when using normal stores (that's anything other than non-temporal stores, i.e., the stores you are using 99.99% of the time).
The text above is hard to interpret exactly, and seems not to have been updated since the Core Duo era. You have the part that says write combining "applies to WC memory but not UC", but of course that leaves out all the other types, like WB. Later you have that "[WC is] particularly important for writes to uncached memory", seemingly contradicting the "doesn't apply to UC" part.
So are write combining buffers used on modern Intel chips for normal stores to WB memory?
Yes, the write combining and coalescing properties of the LFBs support all memory types except the UC type. You can observe their impact experimentally using the following program. It takes two parameters as input:

- STORE_COUNT: the number of 8-byte stores to perform sequentially.
- INCREMENT: the stride between consecutive stores.

There are 4 different values of INCREMENT that are particularly interesting:

- 64: each store targets a different cache line, so neither write combining nor coalescing takes effect.
- 0: all stores target the same location in the same cache line, so write coalescing takes effect.
- 8: every 8 consecutive stores target the same cache line but at different offsets, so write combining takes effect.
- 4: the target locations of consecutive stores overlap within the same cache line, and some stores may span two cache lines (depending on STORE_COUNT). Both write combining and coalescing take effect.

There is another parameter, ITERATIONS, which is used to repeat the same experiment many times to make reliable measurements. You can keep it at 1000.
%define ITERATIONS 1000
BITS 64
DEFAULT REL
section .bss
align 64
bufsrc: resb STORE_COUNT*64
section .text
global _start
_start:
mov ecx, ITERATIONS
.loop:
; Flush all the cache lines to make sure that it takes a substantial amount of time to fetch them.
lea rsi, [bufsrc]
mov edx, STORE_COUNT
.flush:
clflush [rsi]
sfence
lfence
add rsi, 64
sub edx, 1
jnz .flush
; This is the main loop where the stores are issued sequentially.
lea rsi, [bufsrc]
mov edx, STORE_COUNT
.inner:
mov [rsi], rdx
sfence ; Prevents potential combining in the store buffer.
add rsi, INCREMENT
sub edx, 1
jnz .inner
; Spend some time doing nothing so that all the LFBs become free for the next iteration.
mov edx, 100000
.wait:
lfence
sub edx, 1
jnz .wait
sub ecx, 1
jnz .loop
; Exit.
xor edi,edi
mov eax,231
syscall
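Since STORE_COUNT and INCREMENT are not defined inside the listing, they have to be supplied at assembly time. A minimal way to build and run the program might look like the following; the file name wc.asm and the parameter values are just placeholders for illustration:

# Assemble, defining the two experiment parameters on the command line.
# wc.asm is the listing above; 32 and 64 are example values, not prescribed ones.
nasm -f elf64 -DSTORE_COUNT=32 -DINCREMENT=64 -o wc.o wc.asm

# Link as a freestanding static binary (the program uses raw syscalls, no libc).
ld -o wc wc.o

./wc

The same source can then be reassembled with different INCREMENT values (64, 0, 8, 4) to compare the cases described above.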
I recommend the following setup: disable all the hardware prefetchers using sudo wrmsr -a 0x1A4 0xf. This ensures that they will not interfere (or have minimal interference) with the experiments.

The L1D_PEND_MISS.FB_FULL performance counter enables us to capture the effect of write combining on the availability of LFBs. It is supported on Intel Core and later. It is described as follows:
Number of times a request needed a FB (Fill Buffer) entry but there was no entry available for it. A request includes cacheable/uncacheable demands that are load, store or SW prefetch instructions.
First run the code without the inner loop and make sure that L1D_PEND_MISS.FB_FULL is zero, which means that the flush loop has no impact on the event count.
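For reference, a run under perf might look like the sketch below. The event name l1d_pend_miss.fb_full (the way perf usually exposes L1D_PEND_MISS.FB_FULL) and the binary name wc are assumptions; perf list can be used to confirm the exact event name on a given microarchitecture:

# Confirm that the event is available under this name on the current CPU.
perf list | grep -i fb_full

# Count LFB-full occurrences over the whole run; dividing the total by
# ITERATIONS gives the per-iteration value plotted in the figure below.
perf stat -e l1d_pend_miss.fb_full ./wc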
The following figure plots STORE_COUNT against total L1D_PEND_MISS.FB_FULL divided by ITERATIONS.
We can observe the following:

- When the stores can be combined or coalesced into the same LFBs (INCREMENT of 0, 8, or 4), L1D_PEND_MISS.FB_FULL is zero for any number of stores.
- When each store targets a different cache line (INCREMENT of 64), L1D_PEND_MISS.FB_FULL is larger than zero when the number of stores is larger than 10, which matches the 10 buffers available for write combining starting with Nehalem.

This shows that the LFBs are indeed being used to combine and coalesce normal stores to WB memory.

As for this part of the question:

Later you have that "[WC is] particularly important for writes to uncached memory", seemingly contradicting the "doesn't apply to UC" part.
Both WC and UC are classified as uncachable. So you can put the two statements together to deduce that WC is particularly important for writes to WC memory.
See also: Where is the Write-Combining Buffer located? x86.