 

How cache coherence affects the performance in this case

Say core A is modifying a variable X while core B is reading that variable. Of course, in this case a cache coherence protocol will update (or invalidate) the copy of X in core B's cache, because X has been modified by core A, and of course this will slow down execution on core B. However, will cache coherence also affect the performance of core A, assuming that variable X resides in its cache?

pythonic asked Apr 07 '12 16:04


People also ask

Does coherence protocol affect performance positively?

Yes, implementing MESI (or any coherence protocol) has an impact on performance, but that impact is positive compared to the case where no coherence protocol exists. Without one, every read and write would need to go to main memory, making your application hundreds of times slower.

What is cache coherence and why is it important?

As multiple processors operate in parallel, multiple caches may independently hold different copies of the same memory block; this creates the cache coherence problem. Cache coherence schemes avoid this problem by maintaining a uniform state for each cached block of data.

What is cache coherence problem and how it can be solved?

Cache coherence refers to the problem of keeping the data in these caches consistent. The main difficulty is dealing with writes by a processor. There are two general strategies for handling writes to a cache: write-through, where all data written to the cache is also written to memory at the same time, and write-back, where modified data is written to memory only when the cache block is evicted.

What is cache coherence and how is it eliminated?

Cache coherence refers to keeping shared data consistent across the various local caches that hold copies of it at the same time. The problem can be solved either in software or in hardware, though the software approach is harder.


1 Answer

Yes. There are several ways it can affect performance. The standard protocol people use is some variant of MSI (Modified, Shared, Invalid), sometimes with O (Owned) and often E (Exclusive) added.

In your example, core A would start in the Modified (or Exclusive) state, and core B's read would force core A to change it to the Shared state. This action takes up cycles in the cache, since the core can only perform so many operations at any given time, but the impact on core A isn't very high because it is not on the critical path.

The bigger impact comes if core A does a write again. Since the cache line is now in the Shared (or Invalid) state, core A must issue a request to upgrade the line to M or E, and that request must go to core B. This operation is on the critical path: the write can't finish until the cache block is upgraded. That said, writes are generally buffered, so the processor will generally not be blocked on this operation.

Nathan Binkert answered Sep 22 '22 14:09