 

Does a MemoryBarrier guarantee memory visibility for all memory?

If I understand correctly, in C#, a lock block guarantees exclusive access to a set of instructions, but it also guarantees that any reads from memory reflect the latest version of that memory in any CPU cache. We think of lock blocks as protecting the variables read and modified within the block, which means:

  1. Assuming you've properly implemented locking where necessary, those variables can only be read and written to by one thread at a time, and
  2. Reads within the lock block see the latest versions of a variable and writes within the lock block become visible to all threads.

(Right?)
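To make the question concrete, here is a minimal sketch of the pattern being asked about (the `Counter` class and `_gate` field are illustrative names, not from any particular library):

```csharp
using System;
using System.Threading;

class Counter
{
    private readonly object _gate = new object();
    private int _count; // only read or written while holding _gate

    public void Increment()
    {
        lock (_gate) // entering the lock: reads here see writes made under earlier acquisitions
        {
            _count++;
        } // leaving the lock: this write is visible to the next thread that acquires _gate
    }

    public int Read()
    {
        lock (_gate)
        {
            return _count;
        }
    }
}
```

The question is whether the visibility guarantee at the lock boundaries applies only to `_count`, or to all memory.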

This second point is what interests me. Is there some magic by which only variables read and written in code protected by the lock block are guaranteed fresh, or do the memory barriers employed in the implementation of lock guarantee that all memory is now equally fresh for all threads? Pardon my mental fuzziness here about how caches work, but I've read that caches hold several multi-byte "lines" of data. I think what I'm asking is, does a memory barrier force synchronization of all "dirty" cache lines or just some, and if just some, what determines which lines get synchronized?

asked Dec 08 '22 by adv12

1 Answer

If I understand correctly, in C#, a lock block guarantees exclusive access to a set of instructions...

Right. The specification guarantees that.

but it also guarantees that any reads from memory reflect the latest version of that memory in any CPU cache.

The C# specification says nothing whatsoever about "CPU cache". You've left the realm of what is guaranteed by the specification, and entered the realm of implementation details. There is no requirement that an implementation of C# execute on a CPU that has any particular cache architecture.

Is there some magic by which only variables read and written in code protected by the lock block are guaranteed fresh, or do the memory barriers employed in the implementation of lock guarantee that all memory is now equally fresh for all threads?

Rather than try to parse your either-or question, let's state what is actually guaranteed by the language. The following are defined as special effects:

  • Any write to a variable, volatile or not
  • Any read of a volatile field
  • Any throw

The order of special effects is preserved at certain special points:

  • Reads and writes of volatile fields
  • locks
  • thread creation and termination

The runtime is required to ensure that special effects are ordered consistently with special points. So, if there is a read of a volatile field before a lock, and a write after, then the read can't be moved after the write.
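As a sketch of that ordering rule (the class and field names here are purely illustrative), consider a volatile read that precedes a lock and a write that follows the lock entry:

```csharp
using System;
using System.Threading;

class OrderingExample
{
    private volatile bool _flag;              // reads of this volatile field are special effects
    private int _data;                        // writes to this ordinary field are special effects
    private readonly object _gate = new object();

    public bool Demonstrate()
    {
        bool observed = _flag;   // (1) volatile read: a special effect, before the lock

        lock (_gate)             // (2) lock entry/exit: special points
        {
            _data = 42;          // (3) write: a special effect, after the lock entry
        }

        // The runtime may not move (1) after (3): special effects keep their
        // order relative to special points, so the volatile read stays before
        // the write even if a compiler or CPU would otherwise reorder them.
        return observed;
    }
}
```

Note that this is a guarantee about the *order* of observable effects on the executing thread, not a statement that any particular cache gets flushed.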

So, how does the runtime achieve this? Beats the heck out of me. But the runtime is certainly not required to "guarantee that all memory is fresh for all threads". The runtime is required to ensure that certain reads, writes and throws happen in chronological order with respect to special points, and that's all.

In particular, the runtime is not required to ensure that all threads observe the same order of special effects.

Finally, I always end these sorts of discussions by pointing you here:

http://blog.coverity.com/2014/03/26/reordering-optimizations/

After reading that, you should have an appreciation for the sorts of horrid things that can happen even on x86 when you act casual about eliding locks.

answered Dec 22 '22 by Eric Lippert