
Is this a correct use of Thread.MemoryBarrier()?

Assume I have a field that controls execution of some loop:

private static bool shouldRun = true;

And I have a thread running, that has code like:

while(shouldRun) 
{
    // Do some work ....
    Thread.MemoryBarrier();
}

Now, another thread might set shouldRun to false, without using any synchronization mechanism.

As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of shouldRun, effectively preventing an infinite loop.

Is my understanding of Thread.MemoryBarrier correct? Given that I have threads that can set the shouldRun variable (this can't easily be changed), is this a reasonable way to ensure that my loop will stop once shouldRun is set to false by any thread?

asked Jan 04 '12 by driis


2 Answers

Is this a correct use of Thread.MemoryBarrier()?

No. Suppose one thread sets the flag before the loop even begins to execute. The loop could still execute once, using a cached value of the flag. Is that correct? It certainly seems incorrect to me. I would expect that if I set the flag before the first execution of the loop, that the loop executes zero times, not once.

As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of shouldRun, effectively preventing an infinite loop. Is my understanding of Thread.MemoryBarrier correct?

The memory barrier will ensure that the processor does not do any reorderings of reads and writes such that a memory access that is logically before the barrier is actually observed to be after it, and vice versa.

If you are hell-bent on doing low-lock code, I would be inclined to make the field volatile rather than introducing an explicit memory barrier. "volatile" is a feature of the C# language. A dangerous and poorly understood feature, but a feature of the language. It clearly communicates to the reader of the code that the field in question is going to be used without locks on multiple threads.
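As a sketch of what that suggestion looks like (this is an illustration, not the poster's actual code): marking the field volatile gives every read of it acquire semantics and every write release semantics, so the loop is guaranteed to eventually observe a write made by another thread, with no explicit barrier call in the loop body.

```csharp
using System;
using System.Threading;

class Worker
{
    // volatile: every read of shouldRun is an acquire, every write a release,
    // so the JIT/CPU cannot hoist the read out of the loop.
    private static volatile bool shouldRun = true;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            while (shouldRun)
            {
                // Do some work ...
            }
        });
        worker.Start();

        Thread.Sleep(100);   // let the worker spin briefly
        shouldRun = false;   // no Thread.MemoryBarrier() needed; volatile supplies the semantics
        worker.Join();       // terminates because the write becomes visible to the worker
    }
}
```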

is this a reasonable way to ensure that my loop will stop once shouldRun is set to false by any thread?

Some people would consider it reasonable. I would not do this in my own code without a very, very good reason.
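For comparison, here is a sketch of the non-low-lock alternative (my addition, assuming .NET 4 or later): the framework's CancellationToken is the standard way to signal a loop to stop, and it handles memory visibility internally, so neither volatile nor an explicit barrier is needed.

```csharp
using System;
using System.Threading;

class CancellableWorker
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        var worker = new Thread(() =>
        {
            // IsCancellationRequested performs a properly synchronized read,
            // so the loop reliably sees the cancellation request.
            while (!cts.Token.IsCancellationRequested)
            {
                // Do some work ...
            }
        });
        worker.Start();

        Thread.Sleep(100);
        cts.Cancel();    // request the stop from any thread
        worker.Join();
    }
}
```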

Typically low-lock techniques are justified by performance considerations. There are two such considerations:

First, a contended lock is potentially extremely slow; it blocks as long as there is code executing in the lock. If you have a performance problem because there is too much contention then I would first try to solve the problem by eliminating the contention. Only if I could not eliminate the contention would I go to a low-lock technique.

Second, it might be that an uncontended lock is too slow. If the "work" you are doing in the loop takes, say, less than 200 nanoseconds then the time required to check the uncontended lock -- about 20 ns -- is a significant fraction of the time spent doing work. In that case I would suggest that you do more work per loop. Is it really necessary that the loop stops within 200 ns of the control flag being set?

Only in the most extreme of performance scenarios would I imagine that the cost of checking an uncontended lock is a significant fraction of the time spent in the program.

And also, of course, if you are inducing a memory barrier every 200 ns or so, you are also possibly wrecking performance in other ways. The processor wants to make those moving-memory-accesses-around-in-time optimizations for you; if you are forcing it to constantly abandon those optimizations, you're missing out on a potential win.

answered Oct 26 '22 by Eric Lippert


I believe your understanding is a bit off on this line

As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of shouldRun, effectively preventing an infinite loop.

A memory barrier is a way of enforcing an ordering constraint on read/write instructions. While the results of read/write reordering can have the appearance of caching, a memory barrier doesn't actually affect caching in any way. It simply acts as a fence over which read and write instructions cannot cross.

In all probability this won't prevent an infinite loop. What the memory fence is doing in this scenario is forcing all of the reads and writes that happen in the body of the loop to occur before the value of shouldRun is read by the loop conditional.
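If the goal is specifically to keep the read of the flag from being hoisted out of the loop, a more targeted tool (my suggestion, assuming .NET 4.5 or later) is Volatile.Read, which gives that one read acquire semantics rather than erecting a full fence after the loop body:

```csharp
using System.Threading;

class LoopWithVolatileRead
{
    private static bool shouldRun = true;

    public static void Run()
    {
        // The acquire-read cannot be cached across iterations, so a write
        // from another thread is eventually observed and the loop exits.
        while (Volatile.Read(ref shouldRun))
        {
            // Do some work ...
        }
    }

    // Release-write, paired with the acquire-read above.
    public static void Stop() => Volatile.Write(ref shouldRun, false);
}
```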

Wikipedia has a nice walk-through on memory barriers that you may find useful.

answered Oct 26 '22 by JaredPar