To implement lock-free code for a multithreaded application, I used volatile variables.
In theory, the volatile keyword simply makes sure that all threads see the most up-to-date value of a volatile variable; so if thread A updates the variable and thread B reads it just after that update happens, B will see the value most recently written by thread A.
But as I read in C# 4.0 in a Nutshell, this is incorrect, because applying volatile doesn't prevent a write followed by a read from being swapped.
Could this problem be solved by putting Thread.MemoryBarrier() before every read of the volatile variable, like:
private volatile bool _foo = false;

private void A()
{
    // …
    Thread.MemoryBarrier();
    if (_foo)
    {
        // do something
    }
}

private void B()
{
    // …
    _foo = true;
    // …
}
And if this solves the problem, consider a while loop that depends on that value in one of its conditions; is putting Thread.MemoryBarrier() before the while loop a correct way to fix the issue? For example:
private void A()
{
    Thread.MemoryBarrier();
    while (_someOtherConditions && _foo)
    {
        // do something.
    }
}
To be more precise: I want _foo to yield its freshest value whenever any thread reads it, at any time. So if inserting Thread.MemoryBarrier() before reading the variable fixes the issue, could I use a Foo property instead of accessing _foo directly, and call Thread.MemoryBarrier() within that property's getter, like:
public bool Foo
{
    get
    {
        Thread.MemoryBarrier();
        return _foo;
    }
    set
    {
        _foo = value;
    }
}
C# 4.0 in a Nutshell is correct, but its statement is moot. Why?
Let's clarify. Take your original code:
private void A()
{
    // …
    if (_foo)
    {
        // do something
    }
}
What happens if your thread has already checked the _foo variable, but gets suspended just before the // do something comment? At that point your other thread could change the value of _foo, which means all your volatiles and Thread.MemoryBarrier() calls counted for nothing! If it is absolutely essential that // do something not run when the value of _foo is false, then you have no choice but to use a lock.
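For illustration, here is a minimal lock-based sketch of the question's A/B pattern (the _fooLock object, the Worker class name, the bool return value, and the public accessibility are assumptions added for the example):

```csharp
class Worker
{
    private readonly object _fooLock = new object();
    private bool _foo; // no longer needs to be volatile: the lock handles visibility

    // Returns true if the guarded work ran, so callers can observe the outcome.
    public bool A()
    {
        lock (_fooLock)
        {
            // No other thread can change _foo while we hold the lock, so the
            // check and the dependent work form one atomic unit.
            if (_foo)
            {
                // do something
                return true;
            }
            return false;
        }
    }

    public void B()
    {
        lock (_fooLock)
        {
            _foo = true;
        }
    }
}
```

Entering and exiting a lock also acts as a full memory barrier, so no explicit Thread.MemoryBarrier() calls are needed here.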
However, if it is acceptable for // do something to keep executing when _foo suddenly becomes false, then the volatile keyword was more than enough for your needs.
To be clear: all the responders who are telling you to use a memory barrier are incorrect or are providing overkill.
The book is correct.
The CLR's memory model indicates that load and store operations may be reordered. This goes for volatile and non-volatile variables.
Declaring a variable as volatile only means that load operations will have acquire semantics, and store operations will have release semantics. Also, the compiler will avoid performing certain optimizations that rely on the variable being accessed in a serialized, single-threaded fashion (e.g. hoisting loads/stores out of loops).
Using the volatile
keyword alone doesn't create critical sections, and it doesn't cause threads to magically synchronize with each other.
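To see the difference between visibility and atomicity: even a volatile counter can lose updates, because an increment is three separate steps (read, add, write), while Interlocked.Increment performs the whole step atomically. A minimal sketch (the Counter class and its member names are invented for the example):

```csharp
using System.Threading;
using System.Threading.Tasks;

class Counter
{
    private volatile int _unsafeCount; // volatile gives visibility, but ++ is still not atomic
    private int _safeCount;

    public int RunSafe(int iterations)
    {
        Parallel.For(0, iterations, _ =>
        {
            _unsafeCount++;                        // read-modify-write: concurrent updates can be lost
            Interlocked.Increment(ref _safeCount); // atomic read-modify-write
        });
        return _safeCount; // always equals iterations; _unsafeCount may end up smaller
    }
}
```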
You should be extremely careful when you write lock-free code. There's nothing simple about it, and even the experts have trouble getting it right.
Whatever is the original problem you're trying to solve, it's likely that there's a much more reasonable way to do it.
In your second example, you would also need to put a Thread.MemoryBarrier() inside the loop, to make sure you read the most recent value every time you check the loop condition.
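A sketch of the loop with the barrier moved inside, so a fresh value is read before every evaluation of the condition (the LoopWorker class name, the fields' initial values, and the terminating write are invented so the sketch runs to completion):

```csharp
using System.Threading;

class LoopWorker
{
    private volatile bool _someOtherConditions = true;
    private volatile bool _foo = true;

    public int A()
    {
        int passes = 0;
        while (true)
        {
            Thread.MemoryBarrier(); // refresh before every evaluation of the condition
            if (!(_someOtherConditions && _foo))
                break;

            // do something.
            passes++;
            _foo = false; // for the sketch only: terminate after one pass
        }
        return passes;
    }
}
```

On .NET 4.5 and later, Volatile.Read(ref _foo) expresses the same "fresh read" intent more directly than a full barrier.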