I am confused by the perf events cache-misses, L1-icache-load-misses, L1-dcache-load-misses, and LLC-load-misses. When I perf stat all of them, the results don't seem consistent:
%$: sudo perf stat -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations,L1-dcache-load-misses,L1-dcache-loads,L1-dcache-stores,L1-icache-load-misses,LLC-loads,LLC-load-misses,LLC-stores,LLC-store-misses,LLC-prefetches ./my_app
       523,288,816      cache-references                                              (22.89%)
       205,331,370      cache-misses             # 39.239 % of all cache refs         (31.53%)
    10,163,373,365      cycles                                                        (39.62%)
    13,739,845,761      instructions             #  1.35  insn per cycle              (47.43%)
     2,520,022,243      branches                                                      (54.90%)
            20,341      faults
               147      migrations
       237,794,728      L1-dcache-load-misses    #  6.80% of all L1-dcache hits       (62.43%)
     3,495,080,007      L1-dcache-loads                                               (69.95%)
     2,039,344,725      L1-dcache-stores                                              (69.95%)
       531,452,853      L1-icache-load-misses                                         (70.11%)
        77,062,627      LLC-loads                                                     (70.47%)
        27,462,249      LLC-load-misses          # 35.64% of all LL-cache hits        (69.09%)
        15,039,473      LLC-stores                                                    (15.15%)
         3,829,429      LLC-store-misses                                              (15.30%)
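Note on the output above: more events were requested than the PMU has counters, so perf multiplexes them and scales the counts; the percentages in parentheses show how long each event was actually being measured. Splitting the measurement into smaller groups, for example as below (a sketch for the same ./my_app), keeps the scaling error down, although the discrepancy I am asking about is far too large to be just a multiplexing artifact:

%$: sudo perf stat -e cache-references,cache-misses,instructions,cycles ./my_app
%$: sudo perf stat -e L1-dcache-loads,L1-dcache-load-misses,L1-dcache-stores,L1-icache-load-misses ./my_app
%$: sudo perf stat -e LLC-loads,LLC-load-misses ./my_app
%$: sudo perf stat -e LLC-stores,LLC-store-misses ./my_app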
The L1-* and LLC-* events are easy to understand; as far as I can tell, they are read from the hardware counters in the CPU. But how does perf calculate the cache-misses event? From my understanding, if cache-misses counts the number of memory accesses that cannot be served by the CPU cache, then shouldn't it be equal to LLC-load-misses + LLC-store-misses? Clearly, in my case, cache-misses is much higher than the last-level cache miss numbers.

The same confusion applies to cache-references: it is much lower than L1-dcache-loads and much higher than LLC-loads + LLC-stores.
My Linux kernel and CPU info:
%$: uname -r
4.10.0-22-generic
%$: lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz
Stepping: 9
CPU MHz: 885.754
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7584.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
The built-in perf events that you are interested in map to the following hardware performance monitoring events on your processor:
       523,288,816      cache-references         (architectural event: LLC Reference)
       205,331,370      cache-misses             (architectural event: LLC Misses)
       237,794,728      L1-dcache-load-misses    L1D.REPLACEMENT
     3,495,080,007      L1-dcache-loads          MEM_INST_RETIRED.ALL_LOADS
     2,039,344,725      L1-dcache-stores         MEM_INST_RETIRED.ALL_STORES
       531,452,853      L1-icache-load-misses    ICACHE_64B.IFTAG_MISS
        77,062,627      LLC-loads                OFFCORE_RESPONSE (MSR bits 0, 16, 30-37)
        27,462,249      LLC-load-misses          OFFCORE_RESPONSE (MSR bits 0, 17, 26-29, 30-37)
        15,039,473      LLC-stores               OFFCORE_RESPONSE (MSR bits 1, 16, 30-37)
         3,829,429      LLC-store-misses         OFFCORE_RESPONSE (MSR bits 1, 17, 26-29, 30-37)
All of these events are documented in the Intel manual, Volume 3. For more information on how to map perf events to native events, see "Hardware cache events and perf" and "How does perf use the offcore events?".
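If you want to double-check this mapping on your own machine rather than take it from the manuals, the encodings the kernel uses for the generic events are exposed in sysfs, and perf's raw-event syntax lets you count the architectural events directly. A small sketch, assuming an Intel core PMU (r412e encodes event 0x2E with umask 0x41, i.e. LLC Misses; r4f2e encodes event 0x2E with umask 0x4F, i.e. LLC Reference):

%$: cat /sys/devices/cpu/events/cache-misses /sys/devices/cpu/events/cache-references
%$: sudo perf stat -e cache-misses,r412e,cache-references,r4f2e ./my_app

On this CPU family the two sysfs files should read event=0x2e,umask=0x41 and event=0x2e,umask=0x4f, and the perf stat run should report essentially identical counts for each built-in/raw pair.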
But how does perf calculate the cache-misses event? From my understanding, if cache-misses counts the number of memory accesses that cannot be served by the CPU cache, then shouldn't it be equal to LLC-load-misses + LLC-store-misses? Clearly, in my case, cache-misses is much higher than the last-level cache miss numbers.
LLC-load-misses and LLC-store-misses count only cacheable data read requests and RFO requests, respectively, that miss in the L3 cache. LLC-load-misses also includes reads for page walking. Both exclude hardware and software prefetching. (The difference compared to Haswell is that some types of prefetch requests are counted.)

cache-misses also includes prefetch requests and code fetch requests that miss in the L3 cache. All of these events only count core-originating requests. They include requests from uops irrespective of whether they end up retiring and irrespective of the source of the response. It's unclear to me how a prefetch promoted to demand is counted.
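If you want to see how much of that gap comes from hardware prefetching specifically, one experiment is to switch the prefetchers off and re-measure. This is only a sketch: it assumes the msr-tools package (wrmsr) is installed, and it relies on MSR 0x1a4, which Intel documents as the prefetcher-control MSR on many Core-family parts (bits 0-3 disable the L2 streamer, L2 adjacent-line, DCU streamer, and DCU IP prefetchers); verify that it applies to your exact model before writing it, and restore it afterwards:

%$: sudo modprobe msr
%$: sudo wrmsr -a 0x1a4 0xf    # disable all four hardware prefetchers on every core
%$: sudo perf stat -e cache-references,cache-misses,LLC-loads,LLC-load-misses,LLC-stores,LLC-store-misses ./my_app
%$: sudo wrmsr -a 0x1a4 0x0    # re-enable the prefetchers

With the prefetchers off, cache-misses should drop much closer to LLC-load-misses + LLC-store-misses, with most of the remaining difference coming from code fetches.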
Overall, I think cache-misses is always larger than LLC-load-misses + LLC-store-misses, and cache-references is always larger than LLC-loads + LLC-stores.
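You can already put rough numbers on that from the output in the question; the part of cache-misses not covered by LLC-load-misses + LLC-store-misses is exactly the prefetch, code-fetch, and other non-demand-data traffic described above. A quick back-of-the-envelope check with the posted counts:

%$: echo $(( 205331370 - (27462249 + 3829429) ))    # cache-misses minus the two LLC demand-miss events
174039692
%$: echo $(( 523288816 - (77062627 + 15039473) ))   # cache-references minus LLC-loads + LLC-stores
431186716

So on this run roughly 174 million L3 misses and 431 million L3 references came from requests other than the demand data accesses counted by the LLC-* events, which is consistent with heavy hardware prefetch and instruction-fetch traffic.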
The same confusion applies to cache-references: it is much lower than L1-dcache-loads and much higher than LLC-loads + LLC-stores.
It's only guaranteed that cache-references is larger than cache-misses, because the former counts requests irrespective of whether they miss the L3. It's normal for L1-dcache-loads to be larger than cache-references because core-originated loads usually occur only when you have load instructions and because of the cache locality exhibited by many programs. But it's not necessarily always the case because of hardware prefetches.
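Because the balance between demand loads, prefetches, and code fetches changes with program phase, it can also help to watch these counters over time rather than only as end-of-run totals. perf stat can print running counts at a fixed interval (here every 1000 ms); a sketch for the same binary:

%$: sudo perf stat -I 1000 -e cache-references,cache-misses,L1-dcache-loads,L1-dcache-load-misses ./my_app

If cache-references ever approaches or exceeds L1-dcache-loads in some interval, that phase is likely dominated by hardware prefetch or code-fetch traffic rather than by demand loads.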
The L1-* and LLC-* events are easy to understand; as far as I can tell, they are read from the hardware counters in the CPU.
No, it's a trap. They are not easy to understand.