 

Allocate static memory in CPU cache in c/c++ : is it possible?

Is it possible to explicitly create static objects in the CPU cache, so as to make sure those objects always stay in the cache and no performance hit is ever taken from reaching all the way into RAM or, god forbid, HDD-backed virtual memory?

I am particularly interested in targeting the large shared L3 cache, not the L1, L2, instruction or any other cache — just the largest on-die chunk of memory there is.

And just to clarify, to differentiate from other threads I searched before posting this: I am not interested in privatizing the entire cache, just a small region, a few classes' worth.

asked by dtech, Jan 13 '12


2 Answers

No. Cache is not addressable, so you can't allocate objects in it.

What it seems like you meant to ask is: Having allocated space in virtual memory, can I ensure that I always get cache hits?

This is a more complicated question, and the answer is: partly.

You definitely can avoid being swapped out to disk by using your OS's memory-management API (e.g. mlock()) to mark the region as non-pageable, or by allocating from a non-paged pool to begin with.

I don't believe there's a similar API to pin memory into CPU cache. Even if you could reserve CPU cache for that block, you can't avoid cache misses. If another core writes to the memory, ownership WILL be transferred, and you WILL suffer a cache miss and associated bus transfer (possibly to main memory, possibly to the cache of the other core).

As Mathew mentions in his comment, you can also force the cache miss to occur in parallel with other useful work in the pipeline, so that the data is in cache when you need it.

answered by Ben Voigt, Sep 28 '22


You could run another thread that loops over the data and brings it into the L3 cache.

answered by Danny, Sep 28 '22