I've heard that the TLB is maintained by the MMU, not by the CPU cache.
So does one TLB exist on the CPU, shared between all processors, or does each processor have its own TLB cache?
Could anyone please explain the relationship between the MMU and the L1/L2 caches?
If a second context ID is assigned to a shared memory area, then the associated TLB entries can be shared across all of the processes that map that area.
A translation lookaside buffer (TLB) is a memory cache that stores recent translations of virtual addresses to physical addresses for faster retrieval. When a program references a virtual address, the CPU first looks for the translation in the TLB; only on a TLB miss does it walk the page tables in memory to find (and then cache) the mapping.
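To make that lookup order concrete, here is a minimal software model of a single-level, direct-mapped TLB in C. It is only a sketch: PAGE_SHIFT, TLB_ENTRIES and walk_page_table() are names invented for illustration, and a real TLB is a hardware structure inside each core's MMU, not something you program directly.

    /* Minimal sketch of a single-level, direct-mapped TLB, simulated in software. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT  12                  /* 4 KiB pages */
    #define TLB_ENTRIES 64

    struct tlb_entry {
        uint64_t vpn;                       /* virtual page number (tag) */
        uint64_t pfn;                       /* physical frame number     */
        bool     valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Stand-in for the slow path: walking the page tables in memory.
     * Here it just returns an identity mapping so the example runs. */
    static uint64_t walk_page_table(uint64_t vpn)
    {
        return vpn;
    }

    static uint64_t translate(uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;
        uint64_t offset = vaddr & ((1ull << PAGE_SHIFT) - 1);
        struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];

        if (!(e->valid && e->vpn == vpn)) { /* TLB miss: refill from page tables */
            e->vpn   = vpn;
            e->pfn   = walk_page_table(vpn);
            e->valid = true;
        }
        return (e->pfn << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        printf("0x%llx\n", (unsigned long long)translate(0x7f001234ULL));
        return 0;
    }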
Each core has its own MMU, so MMU initialization must be done once per core (e.g. twice on a dual-core part).
The cache stores the actual contents of memory; the TLB, on the other hand, stores only address mappings. The TLB speeds up locating operands in memory by translating their virtual addresses to physical ones, while the cache speeds up reading those operands by keeping copies of them in faster on-chip memory.
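As a rough illustration of that difference, here is what each structure conceptually holds (the field names and sizes are illustrative, not any particular CPU's layout):

    #include <stdint.h>
    #include <stdbool.h>

    struct tlb_entry {          /* translation only: no data bytes at all */
        uint64_t vpn;           /* virtual page number                    */
        uint64_t pfn;           /* physical frame number it maps to       */
        bool     valid;
    };

    struct cache_line {         /* actual memory contents                 */
        uint64_t tag;           /* derived from the physical address      */
        uint8_t  data[64];      /* a copy of 64 bytes of memory           */
        bool     valid;
        bool     dirty;         /* modified relative to DRAM              */
    };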
The TLB caches the translations listed in the page table. Each CPU core can be running in a different context, with different page tables. This is what you'd call the MMU, if it was a separate "unit", so each core has its own MMU. Any shared caches are always physically-indexed / physically tagged, so they cache based on post-MMU physical address.
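For instance, a physically-indexed, physically-tagged (PIPT) shared cache splits the post-MMU physical address as sketched below. The 64-byte line size and 2048 sets are illustrative numbers, not a specific CPU's geometry:

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES 64        /* bytes per cache line                 */
    #define NUM_SETS   2048      /* e.g. a 1 MiB, 8-way slice: 2048 sets */

    int main(void)
    {
        uint64_t paddr  = 0x12345678ULL;           /* post-MMU physical address */
        uint64_t offset = paddr % LINE_BYTES;      /* byte within the line      */
        uint64_t set    = (paddr / LINE_BYTES) % NUM_SETS;
        uint64_t tag    = (paddr / LINE_BYTES) / NUM_SETS;

        printf("set=%llu tag=0x%llx offset=%llu\n",
               (unsigned long long)set, (unsigned long long)tag,
               (unsigned long long)offset);
        return 0;
    }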
The TLB is a cache (of PTEs), so technically it's just an implementation detail that could vary by microarchitecture (between different implementations of the x86 architecture).
In practice, all that really varies is the size. Two-level TLBs are common now, to keep full TLB misses to a minimum while still being fast enough to allow 3 translations per clock cycle.
It's much faster to just re-walk the page tables (which can be hot in local L1 data or L2 cache) to rebuild a TLB entry than to try to share TLB entries across cores. This is what sets the lower bound on what extremes are worth going to in avoiding TLB misses, unlike with data caches which are the last line of defence before you have to go off-core to shared L3 cache, or off-chip to DRAM on an L3 miss.
For example, Skylake added a 2nd page-walk unit (to each core). Good page-walking is essential for workloads where cores can't usefully share TLB entries (threads from different processes, or not touching many shared virtual pages).
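To see why a page walk is cheap enough to just repeat per core, here is roughly how a 4-level x86-64 walk with 4 KiB pages decomposes a virtual address. The actual table reads are only possible in a kernel; this user-space sketch just computes the indices:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t vaddr = 0x00007f3a12345678ULL;   /* example user-space address */

        unsigned pml4 = (vaddr >> 39) & 0x1ff;    /* level 4 index (bits 47:39) */
        unsigned pdpt = (vaddr >> 30) & 0x1ff;    /* level 3 index (bits 38:30) */
        unsigned pd   = (vaddr >> 21) & 0x1ff;    /* level 2 index (bits 29:21) */
        unsigned pt   = (vaddr >> 12) & 0x1ff;    /* level 1 index (bits 20:12) */
        unsigned off  =  vaddr        & 0xfff;    /* byte offset within the page */

        printf("PML4=%u PDPT=%u PD=%u PT=%u offset=0x%x\n",
               pml4, pdpt, pd, pt, off);
        /* Each index selects one of 512 8-byte entries in a 4 KiB table, and
         * those tables are ordinary memory, so they can be hot in L1d/L2. */
        return 0;
    }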
A shared TLB would mean that invlpg, to invalidate cached translations when you do change a page table, would always have to go off-core. (Although in practice an OS needs to make sure other cores running other threads of a multi-threaded process have their private TLB entries "shot down" during something like munmap, using software methods for inter-core communication like an IPI (inter-processor interrupt).)
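Here is a kernel-style sketch of the core-local half of that (ring 0 only; the inline asm faults in user space). The point is that with private TLBs this instruction stays purely local, and cross-core invalidation is handled in software via IPIs:

    #include <stdint.h>

    /* Invalidate the TLB entry for one virtual address on the current core.
     * Privileged: this only works in kernel (ring 0) code. */
    static inline void invlpg_local(const void *vaddr)
    {
        __asm__ volatile ("invlpg (%0)" : : "r"(vaddr) : "memory");
    }

    /* For the other cores, the OS sends an IPI so each core invalidates its
     * own private TLB entries itself (a "TLB shootdown"). */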
But with private TLBs, a context switch to a new process can just set a new CR3 (top-level page-directory pointer) and invalidate this core's whole TLB without having to bother other cores or track anything globally.
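A minimal sketch of that step, again kernel-style and ring 0 only: writing a new top-level page-table physical address into CR3 invalidates this core's non-global TLB entries (when PCID isn't in use) without involving any other core:

    #include <stdint.h>

    static inline void load_new_page_tables(uint64_t pml4_phys_addr)
    {
        /* Privileged: mov to CR3 flushes this core's non-global TLB entries. */
        __asm__ volatile ("mov %0, %%cr3" : : "r"(pml4_phys_addr) : "memory");
    }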
There is a PCID (process-context ID) feature that lets TLB entries be tagged with one of 16 or so IDs, so entries from different processes' page tables can stay hot in the TLB instead of needing to be flushed on every context switch. For a shared TLB you'd need to beef this up.
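Conceptually, tagging just means the TLB lookup has to match on an ID as well as the virtual page number. A toy sketch (field names invented; this is not how the hardware is literally organized):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define TLB_ENTRIES 64

    struct tagged_tlb_entry {
        uint16_t pcid;          /* which process context this entry belongs to */
        uint64_t vpn;
        uint64_t pfn;
        bool     valid;
    };

    static struct tagged_tlb_entry tlb[TLB_ENTRIES];

    /* Hit only if both the virtual page number and the current PCID match, so
     * entries from other processes can stay resident without being usable. */
    static struct tagged_tlb_entry *lookup(uint64_t vpn, uint16_t current_pcid)
    {
        struct tagged_tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
        if (e->valid && e->vpn == vpn && e->pcid == current_pcid)
            return e;
        return NULL;                       /* miss: fall back to a page walk */
    }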
Another complication is that TLB entries need to track the "dirty" and "accessed" bits in the PTE: the CPU has to set those bits in the in-memory PTE the first time a page is written or accessed, because the TLB entries themselves are necessarily just a read-only cache of the PTEs.
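For reference, the accessed and dirty flags live in the in-memory PTE itself (the bit positions below are the standard x86-64 ones); the hardware page walker sets them there on first access / first write, and the TLB only caches the result:

    #include <stdint.h>
    #include <stdio.h>

    #define PTE_PRESENT  (1ULL << 0)
    #define PTE_WRITABLE (1ULL << 1)
    #define PTE_ACCESSED (1ULL << 5)   /* set by the CPU on first access */
    #define PTE_DIRTY    (1ULL << 6)   /* set by the CPU on first write  */

    int main(void)
    {
        uint64_t pte = 0x0000000012345063ULL;   /* illustrative PTE value only */

        if (pte & PTE_ACCESSED)
            printf("page has been touched since A was last cleared\n");
        if (pte & PTE_DIRTY)
            printf("page has been written and must be written back before reuse\n");
        return 0;
    }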
For an example of how the pieces fit together in a real CPU, see David Kanter's writeup of Intel's Sandybridge design. Note that the diagrams are for a single SnB core. The only shared-between-cores cache in most CPUs is the last-level data cache.
Intel's SnB-family designs all use a 2MiB-per-core modular L3 cache on a ring bus. So adding more cores adds more L3 to the total pool, as well as adding new cores (each with their own L2/L1D/L1I/uop-cache, and two-level TLB.)