I have read about cache-as-RAM mode (no-fill mode) numerous times and am wondering: first, can executable code be written into it and jumped to, and second, if so, is that executable code restricted to half of the level-1 cache (since the cache is really just SRAM)?
The main difference between L1, L2, and L3 cache is speed: L1 is the fastest, L2 is slower than L1 but faster than L3, and L3 is the slowest. Cache is fast memory inside the CPU that holds data the CPU uses frequently.
In gaming, a larger pool of L3 cache can improve frame times, because that data sits closer to the CPU cores and each core can reach it much faster than it could reach DRAM.
L3 latency is worse than L2, but a large L3 cache matters because it keeps the CPU from having to go out to RAM for the data it needs.
A level 2 (L2) cache sits between L1 and L3. In early designs it was a separate chip on the motherboard, which made it quite slow; later it moved onto the processor package but outside the core, and in modern CPUs it is integrated on-die, typically per core.
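To make those tiers concrete, here is a rough pointer-chasing micro-benchmark, a sketch only: the buffer sizes are just stand-ins for typical L1/L2/L3 capacities and will not match every CPU. The average time per dependent load steps up as the working set outgrows each cache level and finally spills into DRAM.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Follow a random single-cycle permutation through a buffer of `bytes`
 * bytes and return the average nanoseconds per dependent load. */
static double chase_ns(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(size_t);
    size_t *buf = malloc(n * sizeof(size_t));
    if (!buf)
        return 0.0;

    /* Sattolo's shuffle builds one big cycle; random order defeats the
     * hardware prefetcher better than a fixed stride would. A small
     * xorshift generator avoids depending on RAND_MAX being large. */
    for (size_t i = 0; i < n; i++)
        buf[i] = i;
    uint64_t r = 0x9e3779b97f4a7c15ull;
    for (size_t i = n - 1; i > 0; i--) {
        r ^= r << 13; r ^= r >> 7; r ^= r << 17;
        size_t j = (size_t)(r % i);             /* j in [0, i) */
        size_t tmp = buf[i]; buf[i] = buf[j]; buf[j] = tmp;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        idx = buf[idx];                         /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = idx;                 /* keep the loop from being optimized out */
    (void)sink;
    free(buf);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}

int main(void)
{
    /* Rough stand-ins for L1 / L2 / L3 / DRAM working-set sizes. */
    size_t sizes[] = { 16 << 10, 256 << 10, 8 << 20, 256 << 20 };
    for (int i = 0; i < 4; i++)
        printf("%8zu KiB working set: %6.1f ns per access\n",
               sizes[i] >> 10, chase_ns(sizes[i], 10 * 1000 * 1000));
    return 0;
}
```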
Coreboot originally used CAR to place the C stack in the L1 data cache: http://rere.qmqm.pl/~mirq/cache_as_ram_lb_09142006.pdf http://www.coreboot.org/images/6/6c/LBCar.pdf
To execute code, we have to switch the unified L2 into CAR mode; then the L1i (note that most modern desktop/application CPUs have a split L1: a read/write L1d for data and a read-only L1i for code) can fetch code from the CAR-backed L2. Such a mode was implemented in "UBRX - Universal BIOS Recovery console for x86 PCs" (akeo): http://pete.akeo.ie/2011/08/ubrx-l2-cache-as-instruction-ram.html
There are two L1 caches on-die: one for data and another for instructions, with the instruction cache being read-only. Thus, the CAR setup method from coreboot only provides access to the L1 data cache, not the instruction cache, so we can't simply upload our code into L1d and expect it to run.
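For illustration, below is a heavily simplified sketch of the classic CAR bring-up sequence described in those coreboot documents: mark a window write-back with an MTRR, enable the cache, fill the lines, then enter no-fill mode so they cannot be evicted. CAR_BASE, CAR_SIZE and the exact ordering are assumptions for the example; the real code runs from ROM in early assembly before any stack exists, and newer Intel CPUs use a no-eviction-mode MSR instead of the plain CR0.CD trick shown here.

```c
/* Conceptual cache-as-RAM setup sketch (32-bit protected mode, pre-DRAM). */
#include <stdint.h>

#define CAR_BASE              0xfef00000u      /* hypothetical CAR window       */
#define CAR_SIZE              0x00008000u      /* 32 KiB, must fit in the cache */

#define IA32_MTRR_PHYSBASE0   0x200
#define IA32_MTRR_PHYSMASK0   0x201
#define IA32_MTRR_DEF_TYPE    0x2ff
#define MTRR_TYPE_WRBACK      0x06
#define MTRR_DEF_TYPE_EN      (1u << 11)
#define MTRR_PHYSMASK_VALID   (1u << 11)
#define CR0_NW                (1ul << 29)
#define CR0_CD                (1ul << 30)

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

static inline void invd(void) { __asm__ volatile("invd"); }

static inline unsigned long read_cr0(void)
{
    unsigned long v;
    __asm__ volatile("mov %%cr0, %0" : "=r"(v));
    return v;
}

static inline void write_cr0(unsigned long v)
{
    __asm__ volatile("mov %0, %%cr0" :: "r"(v));
}

void car_setup(void)
{
    /* 1. Mark the CAR window write-back with a variable MTRR and leave the
     *    default memory type uncacheable, so only this window is cached
     *    (mask below assumes 36-bit physical addresses, for brevity). */
    wrmsr(IA32_MTRR_PHYSBASE0, (uint64_t)CAR_BASE | MTRR_TYPE_WRBACK);
    wrmsr(IA32_MTRR_PHYSMASK0, (~((uint64_t)CAR_SIZE - 1) & 0xffffff000ull)
                               | MTRR_PHYSMASK_VALID);
    wrmsr(IA32_MTRR_DEF_TYPE, MTRR_DEF_TYPE_EN);   /* default type = UC */

    /* 2. Invalidate and enable the cache (clear CD and NW). */
    invd();
    write_cr0(read_cr0() & ~(CR0_CD | CR0_NW));

    /* 3. Zero the window: write-allocate pulls every line into the cache,
     *    even though there is no DRAM behind the addresses yet. */
    for (volatile uint32_t *p = (volatile uint32_t *)CAR_BASE;
         p < (volatile uint32_t *)(CAR_BASE + CAR_SIZE); p++)
        *p = 0;

    /* 4. Enter no-fill mode (CD=1, NW=0): hits are still served from the
     *    cache, misses no longer allocate, so the window cannot be evicted.
     *    The bootstrap can now point the stack into it and call C code;
     *    executing code out of it is the extra step UBRX takes, by keeping
     *    the lines in the unified L2 so that L1i fetches can hit them. */
    write_cr0((read_cr0() | CR0_CD) & ~CR0_NW);
}
```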
There was also a commercial company that built a product to protect code from cold-boot ("frozen memory") attacks, where the attacker freezes the DRAM, pulls the module, and moves it to another PC to read it (most of the data survives for tens of seconds). Their product loaded the entire OS/hypervisor kernel into the cache, so both code and data stayed inside the CPU. The product was vCage from PrivateCore (via Reverse engineering a Docker deployment on private cloud and Preventing reverse engineering with binary code and secret key; thanks to user AdamNYC for the info):
("The vCage host is packaged as a stateless live image Linux KVM on a RAM disk").
https://security.stackexchange.com/questions/53165/is-it-possible-to-boot-an-encrypted-server-remotely-and-securely, comment by security.SE user northox:
"In the case of vCage you basically only need to trust Intel and Private Core. Briefly, vCage provide a L3 resident hypervisor validated with remote attestation."
Check slide 36 of https://forum.stanford.edu/events/2014/2014slides/plenary/Oded%20Stanford%20Annual%20Forum%202014.pdf#page=36
"The CPU as the perimeter of computation • Physical security is the CPU package itself • Loading stateless image into CPU cache"
The image is loaded into the CPU cache (L3), and the OS is Linux! (slide 39)
Biggest challenges
• Squeeze the Linux kernel into < 10MB while
  – Keeping all virtualization features
  – Keeping it stable (No OOM allowed)
• Keep CPU cache under our control
This means vCage was capable of executing code from cache; but the company is now a non-public part of Facebook, so there are no newer details or open-source Linux patches.