 

Can a read instruction after an unrelated lock statement be moved before the lock?

This question is a follow-up to comments in this thread.

Let's assume we have the following code:

// (1)
lock (padlock)
{
    // (2)
}
var value = nonVolatileField; // (3)

Furthermore, let's assume that no instruction in (2) has any effect on nonVolatileField and vice versa.

Can the reading instruction (3) be reordered in such a way that it ends up before the lock statement (1) or inside it (2)?

As far as I can tell, nothing in the C# Specification (§3.10) and the CLI Specification (§I.12.6.5) prohibits such reordering.

Please note that this is not the same question as this one. Here I am asking specifically about read instructions, because as far as I understand, they are not considered side-effects and have weaker guarantees.
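For concreteness, here is a minimal, self-contained sketch of the kind of code I have in mind (the field and method names are made up purely for illustration):

class Example
{
    private readonly object padlock = new object();
    private int nonVolatileField;      // written by another thread, not marked volatile
    private int unrelatedState;        // only ever touched inside the lock

    public int ReadAfterLock()
    {
        // (1)
        lock (padlock)
        {
            unrelatedState++;          // (2) does not touch nonVolatileField
        }
        var value = nonVolatileField;  // (3) can this read move before or into the lock?
        return value;
    }

    public void WriteFromAnotherThread(int newValue)
    {
        nonVolatileField = newValue;   // plain, non-volatile write
    }
}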

asked Dec 06 '16 by tearvisus

2 Answers

I believe this is partially guaranteed by the CLI spec, although it's not as clear as it might be. From I.12.6.5:

Acquiring a lock (System.Threading.Monitor.Enter or entering a synchronized method) shall implicitly perform a volatile read operation, and releasing a lock (System.Threading.Monitor.Exit or leaving a synchronized method) shall implicitly perform a volatile write operation. See §I.12.6.7.

Then from I.12.6.7:

A volatile read has “acquire semantics” meaning that the read is guaranteed to occur prior to any references to memory that occur after the read instruction in the CIL instruction sequence. A volatile write has “release semantics” meaning that the write is guaranteed to happen after any memory references prior to the write instruction in the CIL instruction sequence.

So entering the lock should prevent (3) from moving to (1). Reading from nonVolatileField still counts as a "reference to memory", I believe. However, the read could still be performed before the volatile write when the lock exits, so it could still be moved to (2).
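To see where those fences sit, it helps to look at roughly what the lock statement expands to (a simplified sketch; the exact expansion varies by compiler version):

using System.Threading;

class LockExpansion
{
    private readonly object padlock = new object();
    private int nonVolatileField;

    public int ReadAfterLock()
    {
        // Roughly what `lock (padlock) { ... }` expands to:
        bool lockTaken = false;
        try
        {
            Monitor.Enter(padlock, ref lockTaken); // implicit volatile read: acquire fence,
                                                   // so (3) cannot be moved above this line
            // (2) lock body
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(padlock);             // implicit volatile write: release fence,
                                                   // which does not stop a later read from
                                                   // moving up past it
        }

        var value = nonVolatileField;              // (3) may still be performed inside the lock
        return value;
    }
}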

The C#/CLI memory model leaves a lot to be desired at the moment. I'm hoping that the whole thing can be clarified significantly (and probably tightened up, to make some "theoretically valid but practically awful" optimizations invalid).
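If you need the read to stay strictly after the lock, one conservative option (a sketch, not the only approach) is an explicit full fence between the lock block and the read:

using System.Threading;

class FencedRead
{
    private readonly object padlock = new object();
    private int nonVolatileField;

    public int ReadStrictlyAfterLock()
    {
        lock (padlock)
        {
            // (2) unrelated work
        }

        Thread.MemoryBarrier();        // full fence: no read or write may cross it in either
                                       // direction, so the read below cannot be hoisted
                                       // into (or before) the lock
        var value = nonVolatileField;  // (3)
        return value;
    }
}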

answered Oct 03 '22 by Jon Skeet


As far as .NET is concerned, entering a monitor (the lock statement) has acquire semantics, as it implicitly performs a volatile read, and exiting a monitor (the end of the lock block) has release semantics, as it implicitly performs a volatile write (see §I.12.6.5, Locks and threads, in the Common Language Infrastructure (CLI) Partition I).

volatile bool areWeThereYet = false;

// In thread 1
// Accesses, usually writes: create objects, initialize them
areWeThereYet = true;

// In thread 2
if (areWeThereYet)
{
    // Accesses, usually reads: use created and initialized objects
}

When you write a value to areWeThereYet, all accesses before it have already been performed and cannot be reordered to after the volatile write.

When you read from areWeThereYet, subsequent accesses cannot be reordered to before the volatile read.

In this case, when thread 2 observes that areWeThereYet has changed, it is guaranteed that the accesses that follow (usually reads) will observe the other thread's accesses (usually writes), assuming no other code is touching the affected variables.
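Here is a minimal runnable sketch of that publish/consume pattern (the message field and the thread plumbing are mine, added purely for illustration):

using System;
using System.Threading;

class Publish
{
    private static volatile bool areWeThereYet;
    private static string message;              // deliberately non-volatile

    static void Main()
    {
        var reader = new Thread(() =>
        {
            while (!areWeThereYet) { }           // spin on the volatile read (acquire)
            Console.WriteLine(message);          // guaranteed to observe "hello", never null
        });
        reader.Start();

        message = "hello";                       // ordinary write...
        areWeThereYet = true;                    // ...published by the volatile write (release)

        reader.Join();
    }
}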

As for other synchronization primitives in .NET, such as SemaphoreSlim, although it is not explicitly documented, they would be rather useless if they didn't have similar semantics: programs built on them could not work correctly on platforms or hardware architectures with a weaker memory model.
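For example, a sketch that assumes SemaphoreSlim.Release and Wait provide the usual release/acquire pairing (which, as noted, is not spelled out explicitly in the documentation):

using System;
using System.Threading;

class SemaphorePublish
{
    private static readonly SemaphoreSlim ready = new SemaphoreSlim(0, 1);
    private static string payload;              // ordinary, non-volatile field

    static void Main()
    {
        var consumer = new Thread(() =>
        {
            ready.Wait();                        // blocks until Release is called (acquire)
            Console.WriteLine(payload);          // expected to observe "done"
        });
        consumer.Start();

        payload = "done";                        // ordinary write...
        ready.Release();                         // ...published by the release operation

        consumer.Join();
    }
}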


Many people share the view that Microsoft ought to enforce a strong memory model on such weaker architectures, similar to x86/amd64, so as to keep the existing code base (Microsoft's own and their customers') compatible.

I cannot verify this myself, as I don't have an ARM device running Microsoft Windows, much less the .NET Framework for ARM, but at least one MSDN Magazine article, "CLR - .NET Development for ARM Processors" by Andrew Pardoe, states:

The CLR is allowed to expose a stronger memory model than the ECMA CLI specification requires. On x86, for example, the memory model of the CLR is strong because the processor’s memory model is strong. The .NET team could’ve made the memory model on ARM as strong as the model on x86, but ensuring the perfect ordering whenever possible can have a notable impact on code execution performance. We’ve done targeted work to strengthen the memory model on ARM—specifically, we’ve inserted memory barriers at key points when writing to the managed heap to guarantee type safety—but we’ve made sure to only do this with a minimal impact on performance. The team went through multiple design reviews with experts to make sure that the techniques applied in the ARM CLR were correct. Moreover, performance benchmarks show that .NET code execution performance scales the same as native C++ code when compared across x86, x64 and ARM.

answered Oct 03 '22 by acelent