How does CLFLUSH work for an address that is not in cache yet?

We are trying to use the Intel CLFLUSH instruction to flush the cache contents of a process from userspace on Linux.

We wrote a very simple C program that first accesses a large array and then calls CLFLUSH to flush the virtual address range of the whole array. We measure the latency it takes for CLFLUSH to flush the whole array. The array size is an input to the program, and we vary it from 1MB to 40MB in steps of 2MB.

In our understanding, CLFLUSH should flush content that is already in the cache. So we expect the latency of flushing the whole array to first increase linearly with the array size, and then stop increasing once the array is larger than 20MB, which is the size of the LLC on our machine.

However, the experimental result is quite surprising, as shown in the figure: the latency does not stop increasing after the array size exceeds 20MB.

We are wondering whether CLFLUSH could bring an address into the cache before flushing it out, if that address is not in the cache yet. We also searched the Intel Software Developer's Manual and didn't find any explanation of what CLFLUSH does when an address is not in the cache.

[Figure: latency of flushing the whole array vs. array size, for the read-only and read-and-write scenarios]

Below is the data we used to draw the figure. The first column is the size of the array in KB, and the second column is the latency of flushing the whole array in seconds.

Any suggestion/advice is more than appreciated.

[Modified]

The previous code is unnecessary: CLFLUSH can be done in userspace much more easily, with similar performance, so I deleted the messy code to avoid confusion.
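For reference, a minimal sketch of this kind of userspace measurement (not the exact code we ran; it assumes GCC/Clang x86 intrinsics and 64-byte cache lines) looks roughly like this:

    /* Minimal sketch (hypothetical, not the exact program we used):
       touch every cache line of the array, then time CLFLUSH over the
       whole range. Compile with: gcc -O2 flush.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <x86intrin.h>          /* _mm_clflush, _mm_mfence */

    #define LINE 64                  /* assume 64-byte cache lines */

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(int argc, char **argv)
    {
        size_t kb = (argc > 1) ? strtoull(argv[1], NULL, 10) : 1024;
        size_t size = kb * 1024;
        volatile char *buf = malloc(size);

        /* "Read Only" scenario: load each line to bring it into cache.
           For "Read and Write", store to each line instead (buf[i] = 1). */
        for (size_t i = 0; i < size; i += LINE)
            (void)buf[i];

        _mm_mfence();
        double t0 = now_sec();
        for (size_t i = 0; i < size; i += LINE)
            _mm_clflush((const void *)&buf[i]);
        _mm_mfence();
        double t1 = now_sec();

        printf("%zu,%.10f\n", kb, t1 - t0);
        free((void *)buf);
        return 0;
    }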

SCENARIO=Read Only
1024,.00158601000000000000
3072,.00299244000000000000
5120,.00464945000000000000
7168,.00630479000000000000
9216,.00796194000000000000
11264,.00961576000000000000
13312,.01126760000000000000
15360,.01300500000000000000
17408,.01480760000000000000
19456,.01696180000000000000
21504,.01968410000000000000
23552,.02300760000000000000
25600,.02634970000000000000
27648,.02990350000000000000
29696,.03403090000000000000
31744,.03749210000000000000
33792,.04092470000000000000
35840,.04438390000000000000
37888,.04780050000000000000
39936,.05163220000000000000

SCENARIO=Read and Write
1024,.00200558000000000000
3072,.00488687000000000000
5120,.00775943000000000000
7168,.01064760000000000000
9216,.01352920000000000000
11264,.01641430000000000000
13312,.01929260000000000000
15360,.02217750000000000000
17408,.02516330000000000000
19456,.02837180000000000000
21504,.03183180000000000000
23552,.03509240000000000000
25600,.03845220000000000000
27648,.04178440000000000000
29696,.04519920000000000000
31744,.04858340000000000000
33792,.05197220000000000000
35840,.05526950000000000000
37888,.05865630000000000000
39936,.06202170000000000000
Asked Mar 09 '16 by Mike




2 Answers

You want to look at the new optimization guide for Skylake. Intel came out with another version of CLFLUSH, called CLFLUSHOPT, which is weakly ordered and performs much better in your scenario.

See section 7.5.7 in here - http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf

In general, CLFLUSHOPT throughput is higher than that of CLFLUSH, because CLFLUSHOPT orders itself with respect to a smaller set of memory traffic as described above and in Section 7.5.6. The throughput of CLFLUSHOPT will also vary. When using CLFLUSHOPT, flushing modified cache lines will experience a higher cost than flushing cache lines in non-modified states. CLFLUSHOPT will provide a performance benefit over CLFLUSH for cache lines in any coherence state. CLFLUSHOPT is more suitable to flush large buffers (e.g. greater than many KBytes), compared to CLFLUSH. In single-threaded applications, flushing buffers using CLFLUSHOPT may be up to 9X better than using CLFLUSH with Skylake microarchitecture.

The section also explains that flushing modified data is slower, which obviously comes from the writeback penalty.
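For example, a minimal sketch of what a CLFLUSHOPT-based flush loop could look like (assuming GCC/Clang intrinsics, a CPU that actually supports CLFLUSHOPT, and compiling with -mclflushopt; the function name here is just a placeholder):

    /* Sketch: flush a buffer with CLFLUSHOPT instead of CLFLUSH.
       Requires a Skylake-or-newer CPU; compile with -mclflushopt. */
    #include <stddef.h>
    #include <x86intrin.h>          /* _mm_clflushopt, _mm_sfence */

    #define LINE 64

    static void flush_buffer_opt(const void *p, size_t len)
    {
        const char *c = (const char *)p;
        for (size_t i = 0; i < len; i += LINE)
            _mm_clflushopt((void *)(c + i));  /* weakly ordered: flushes can overlap */
        _mm_sfence();  /* order the flushes before later stores / timing */
    }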

As for the increasing latency: are you measuring the overall time it takes to walk the address range and clflush each line? In that case you're linearly dependent on the array size, even once it passes the LLC size. Even if the lines aren't there, each clflush still has to be processed by the execution engine and memory unit, and look up the entire cache hierarchy for every line, present or not.

Answered Sep 22 '22 by Leeor


This doesn't explain the knee in the read-only graph, but does explain why it doesn't plateau.


I didn't get around to testing locally to look into the difference between the hot and cold cache case, but I did come across a performance number for clflush:

This AIDA64 instruction latency/throughput benchmark repository lists a single-socket Haswell-E CPU (i7-5820K) as having a clflush throughput of one per ~99.08 cycles. It doesn't say whether that's for the same address repeatedly, or what.

So clflush isn't anywhere near free even when it doesn't have to do any work. It's still a microcoded instruction, not heavily optimized because it's usually not a big part of the CPU's workload.

Skylake is getting ready for that to change, with support for persistent memory connected to the memory controller. On Skylake (i5-6400T), measured throughput was:

  • clflush: one per ~66.42 cycles
  • clflushopt: one per ~56.33 cycles

Perhaps clflushopt is more of a win when some of the lines are actually dirty and need writing back, or when L3 is busy with other cores doing the same thing. Or maybe Intel just wants to get software using the weakly-ordered version ASAP, before making even bigger improvements to throughput. It's ~15% faster in this case, which is not bad. A rough, hypothetical harness for that kind of per-instruction throughput measurement is sketched below.
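This sketch repeatedly flushes the same line and counts TSC ticks; note the TSC counts reference cycles, not core clocks, so it only approximates the figures above:

    /* Rough sketch: estimate clflush throughput by flushing the same
       (clean) line in a tight loop and dividing elapsed TSC ticks by
       the iteration count. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>          /* _mm_clflush, __rdtsc */

    int main(void)
    {
        static char line[64] __attribute__((aligned(64)));
        const uint64_t iters = 10 * 1000 * 1000;

        _mm_clflush(line);                  /* warm up */
        uint64_t start = __rdtsc();
        for (uint64_t i = 0; i < iters; i++)
            _mm_clflush(line);              /* same address every time */
        uint64_t ticks = __rdtsc() - start;

        printf("~%.2f TSC ticks per clflush\n", (double)ticks / iters);
        return 0;
    }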

Answered Sep 24 '22 by Peter Cordes