I have a basic question about the policy followed by the memory system.
Consider a core with private L1 and L2 caches. After the L2 cache there is a bus on which the coherence traffic runs. Now, if the cache line for an address X is evicted from the L2 cache, is it necessary to evict that address from the L1 cache as well?
One reason to evict it would be that it helps maintain the coherence protocol's invariant [if a line shows as invalid in L2, this core does not hold that address at all].
L1 is "level-1" cache memory, usually built onto the microprocessor chip itself; for example, the Intel Pentium MMX came with 32 KB of L1 cache. L2 (that is, level-2) cache memory was historically on a separate chip (sometimes on an expansion card), though on modern processors it is also on-die; in either case it can be accessed much more quickly than the larger "main" memory.
A most-recently-used (MRU) eviction policy removes the item that was accessed most recently. A least-frequently-used (LFU) policy removes the item that has been accessed least often.
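To make those two policies concrete, here is a minimal sketch in Python. The class name, the capacity, and the bookkeeping fields are all invented for illustration and are not taken from any particular library.

```python
class SimpleCache:
    """Tiny fixed-capacity key/value cache with a pluggable eviction policy."""

    def __init__(self, capacity, policy="lfu"):
        self.capacity = capacity
        self.policy = policy      # "lfu" or "mru"
        self.data = {}            # key -> value
        self.freq = {}            # key -> access count (for LFU)
        self.last_used = {}       # key -> logical timestamp (for MRU)
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.freq[key] = self.freq.get(key, 0) + 1
        self.last_used[key] = self.clock

    def get(self, key):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return None

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            if self.policy == "lfu":
                # LFU: evict the entry that has been accessed least often.
                victim = min(self.data, key=lambda k: self.freq[k])
            else:
                # MRU: evict the entry that was accessed most recently.
                victim = max(self.data, key=lambda k: self.last_used[k])
            for table in (self.data, self.freq, self.last_used):
                table.pop(victim, None)
        self.data[key] = value
        self._touch(key)


# Example: with LFU and capacity 2, inserting a third key evicts "b",
# because "a" has been accessed twice and "b" only once.
cache = SimpleCache(capacity=2, policy="lfu")
cache.put("a", 1)
cache.get("a")
cache.put("b", 2)
cache.put("c", 3)
print(sorted(cache.data))   # ['a', 'c']
```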
When it comes to speed, the L2 cache lags behind the L1 cache but is still much faster than your system RAM. As a rough rule of thumb, the L1 cache is on the order of 100 times faster than RAM, while the L2 cache is around 25 times faster.
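To put those ratios into rough numbers, here is a tiny back-of-the-envelope calculation. The 100 ns main-memory latency and the 95%/4%/1% hit split are assumptions chosen purely for illustration, not measured values.

```python
# Rough figures implied by the ratios quoted above, assuming (for
# illustration only) a main-memory latency of about 100 ns.
RAM_NS = 100.0
L1_NS = RAM_NS / 100   # "100 times faster" -> roughly 1 ns
L2_NS = RAM_NS / 25    # "around 25 times faster" -> roughly 4 ns

# Hypothetical split: 95% of accesses served by L1, 4% by L2, 1% by RAM.
avg_ns = 0.95 * L1_NS + 0.04 * L2_NS + 0.01 * RAM_NS
print(f"L1 ~{L1_NS:.0f} ns, L2 ~{L2_NS:.0f} ns, RAM ~{RAM_NS:.0f} ns")
print(f"average access time ~{avg_ns:.2f} ns")
```

Note how, under these made-up numbers, the 1% of accesses that go all the way to RAM contribute almost half of the average access time.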
There are three different designs and all are used.
Exclusive: Data in the L1 cache is never in the L2 cache. Data in the L2 cache is never in the L1 cache.
Inclusive: Data in the L1 cache must also be in the L2 cache.
Neither: Data in the L1 cache may or may not be in the L2 cache.
Each of these has advantages and disadvantages. The inclusive scheme allows the cache coherency protocol to ignore the L1 cache -- if data isn't in the L2 cache, it isn't in the L1 cache. The price is that when a line is evicted from an inclusive L2, the same line must also be invalidated in the L1 (a "back-invalidation") to keep that property true; in the exclusive and "neither" schemes no such forced L1 eviction is required. The exclusive scheme, on the other hand, makes the most effective use of precious cache memory.
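Here is a minimal sketch, assuming an inclusive L1/L2 pair, of the back-invalidation described above. The caches are modeled as plain sets of line addresses, the capacities and victim choice are arbitrary, and all names are made up for illustration; a real cache also tracks ways, tags, and coherence states.

```python
class InclusiveHierarchy:
    """Toy model of a private L1/L2 pair with an inclusive L2."""

    def __init__(self, l1_capacity=2, l2_capacity=4):
        self.l1 = set()
        self.l2 = set()
        self.l1_capacity = l1_capacity
        self.l2_capacity = l2_capacity

    def _pick_victim(self, cache):
        # Arbitrary victim choice; a real cache would use LRU or similar.
        return next(iter(cache))

    def evict_from_l2(self):
        victim = self._pick_victim(self.l2)
        self.l2.discard(victim)
        # Back-invalidation: drop the same line from L1 so that
        # "not present in L2" still implies "not present in this core".
        self.l1.discard(victim)
        return victim

    def load(self, line):
        if line in self.l1:
            return
        if line not in self.l2:
            if len(self.l2) >= self.l2_capacity:
                self.evict_from_l2()
            self.l2.add(line)
        if len(self.l1) >= self.l1_capacity:
            # A plain L1 eviction needs no L2 action: the line stays in L2.
            self.l1.discard(self._pick_victim(self.l1))
        self.l1.add(line)
        # Inclusion invariant: every L1 line is also an L2 line.
        assert self.l1 <= self.l2
```

In an exclusive or "neither" design, evict_from_l2 would not need to touch the L1, which is why those schemes can answer "no" to the original question, while an inclusive scheme answers "yes".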
Inclusive designs are becoming more popular because faster inter-core synchronization is becoming more important than having slightly larger effective cache sizes.