I'm coming largely from a C++ background, but I think this question applies to threading in any language. Here's the scenario:
We have two threads (ThreadA and ThreadB), and a value x in shared memory.
Assume that access to x is appropriately controlled by a mutex (or other suitable synchronization control).
If the threads happen to run on different processors, what happens if ThreadA performs a write operation, but its processor places the result in its L2 cache rather than in main memory? Then, if ThreadB tries to read the value, will it not just look in its own L1/L2 cache or main memory and work with whatever stale value is there?
If that's not the case, then how is this issue managed?
If that is the case, then what can be done about it?
On modern multi-core processors, independent workloads often interfere with each other by competing for shared cache space. However, for multi-threaded workloads, where a single copy of data can be accessed by multiple threads, the threads can cooperatively share cache.
Cache coherence is the discipline that ensures changes to the values of shared operands (data) are propagated throughout the system in a timely fashion. Changes to the data in any cache must be propagated to the other copies (of that cache line) in the peer caches.
There are other benefits: both hyperthreads can share the L1 instruction cache, for instance, if a tight loop is shared between the two threads.
Your example would work just fine.
Multiple processors use a coherency protocol such as MESI to ensure that data remains in sync between the caches. With MESI, each cache line is considered to be either Modified, Exclusive, Shared between CPUs, or Invalid. Writing to a cache line that is shared between processors forces it to become invalid in the other CPUs' caches, keeping the caches in sync.
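To see coherence in action, here is a minimal sketch of my own (not from the original answer; the names and the use of std::atomic are illustrative assumptions): one thread dirties the cache line holding a shared value, and the coherency protocol forces the other thread's reads to pick up the fresh copy.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> shared_value{0};

int main() {
    // Writer: the store dirties the cache line holding shared_value;
    // the coherency protocol invalidates any copy held by the reader's core.
    std::thread writer([] { shared_value.store(42); });

    // Reader: each load re-fetches the line after it has been invalidated,
    // so the spin is guaranteed to observe the new value eventually.
    std::thread reader([] {
        while (shared_value.load() == 0) { /* spin */ }
        std::cout << "saw " << shared_value.load() << "\n";
    });

    writer.join();
    reader.join();
}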
However, this is not quite enough. Different processors have different memory models, and most modern processors support some level of reordering of memory accesses. In these cases, memory barriers are needed.
For instance, if you have Thread A:
DoWork();
workDone = true;
And Thread B:
while (!workDone) {}
DoSomethingWithResults();
With both running on separate processors, there is no guarantee that the writes done within DoWork() will be visible to Thread B before the write to workDone, so DoSomethingWithResults() could proceed with a potentially inconsistent state. Memory barriers guarantee some ordering of the reads and writes: adding a memory barrier after DoWork() in Thread A would force all reads/writes done by DoWork() to complete before the write to workDone, so that Thread B gets a consistent view. Mutexes inherently provide a memory barrier, so reads/writes cannot move past the calls to lock and unlock.
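In C++, one way to express that barrier is a release/acquire pairing on an atomic flag. This is a minimal sketch under my own assumptions (the original answer speaks only of a generic memory barrier; the function bodies and the use of std::atomic are illustrative):

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> workDone{false};
int result = 0;  // written by DoWork(), read later by Thread B

void DoWork() { result = 42; }
void DoSomethingWithResults() { std::cout << result << "\n"; }  // safe: result is visible here

void threadA() {
    DoWork();
    // Release barrier: all prior writes (including result) complete before this store.
    workDone.store(true, std::memory_order_release);
}

void threadB() {
    // Acquire barrier: once the load sees true, all writes before the release are visible.
    while (!workDone.load(std::memory_order_acquire)) { /* spin */ }
    DoSomethingWithResults();
}

int main() {
    std::thread a(threadA), b(threadB);
    a.join();
    b.join();
}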
In your case, one processor would signal to the others that it dirtied a cache line and force the other processors to reload from memory. Acquiring the mutex to read and write the value guarantees that the change to memory is visible to the other processor in the order expected.
Most locking primitives like mutexes imply memory barriers. These force a cache flush and reload to occur.
For example, in C++:

#include <mutex>

std::mutex m;
int x = 0;

void threadA() {
    std::lock_guard<std::mutex> lock(m);
    x = 5;  // probably writes to cache
}           // unlock: forcibly publishes the cached write to the other CPUs

void threadB() {
    std::lock_guard<std::mutex> lock(m);  // lock: discards stale data in the local cache
    int y = x;                            // x must be read fresh, not from a stale copy
}
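Wiring the two together (a minimal driver of my own; it additionally needs <thread>), if threadB acquires the mutex after threadA has released it, the read of x is guaranteed to observe 5:

#include <thread>

int main() {
    std::thread a(threadA);
    a.join();                // threadA has released the mutex by this point
    std::thread b(threadB);  // threadB's lock discards stale data, so y == 5
    b.join();
}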