I'm putting together a small patch for the cachegrind/callgrind tool in valgrind which will auto-detect, using completely generic code, CPU instruction and cache configuration (right now only x86/x64 auto-configures, and other architectures don't provide CPUID type configuration to non-privileged code). This code will need to execute entirely in a non-privileged context i.e. pure user mode code. It also needs to be portable across very different POSIX implementations, so grokking /proc/cpuinfo won't do as one of our destination systems doesn't have such a thing.
Detecting the CPU frequency, the number of caches, their sizes, and even the cache line size can all be done using 100% generic POSIX code with no CPU-specific opcodes whatsoever (just a lot of reasonable assumptions, such as that adding two numbers together, in the absence of memory or register dependency stalls, will probably execute in a single cycle). This part is fairly straightforward.
What isn't so straightforward, and why I'm asking StackOverflow, is how to detect cache line associativity for a given cache. Associativity is how many places in a cache can hold a given cache line from main memory. I can see how L1 cache associativity could be detected, but L2 cache? Surely the L1 associativity gets in the way?
I appreciate this is probably a problem which cannot be solved. But I'll throw it onto StackOverflow and hope someone knows something I don't. Note that if we fail here, I'll simply hard-code a default associativity of four-way, assuming it wouldn't make a huge difference to the results.
Thanks,
Niall
On x86, you can use the CPUID instruction with function 2 to determine various properties of the cache and the TLB.
When the processor accesses a part of memory that is not already in the cache it loads a chunk of the memory around the accessed address into the cache, hoping that it will soon be used again. The chunks of memory handled by the cache are called cache lines. The size of these chunks is called the cache line size.
Here's a scheme:
Use a memory access pattern with a stride S and a number of unique elements accessed N. The test first touches each unique element, then measures the average time per access by running through the same pattern a very large number of times.
Example: for S = 2 and N = 4 the address pattern would be 0,2,4,6,0,2,4,6,0,2,4,6,...
Consider a multi-level cache hierarchy. You can make the following reasonable assumptions:

1. L1 and L2 use the same cache line size, and each cache selects its set from the low-order address bits just above the line offset.
2. L1 has no more sets than L2.

These 2 assumptions allow us to say that if two addresses map to the same set in the (n+1)th cache (say L2), then they must map to the same set in the nth cache (say L1), because L1's set-index bits are a subset of L2's.
Say you know the sizes of L1, L2 caches. You need to find the associativity of L2 cache.
You get the following regimes (with S = size of L2, so that every element maps to the same set in both L1 and L2):

- N ≤ L1 associativity: all N lines fit in one L1 set, every access hits in L1, and the average access time is lowest.
- L1 associativity < N ≤ L2 associativity: the L1 set thrashes, but all N lines still fit in one L2 set, so accesses hit in L2.
- N > L2 associativity: the L2 set thrashes too, and accesses fall through to the next level (or to memory).

So, if you plot average access time against N (when S = size of L2), you will see a step-like plot. The end of the lowest step gives you the associativity of L1. The end of the next step gives you the associativity of L2.
You can repeat the same procedure between L2 and L3, and so on. Please let me know if that helps. The method of obtaining cache parameters by varying the stride of a memory access pattern is similar to that used by the lmbench benchmark. I don't know if lmbench infers associativity too.