I know that I can create an immutable (i.e. thread-safe) object like this:
class CantChangeThis
{
    private readonly int value;

    public CantChangeThis(int value)
    {
        this.value = value;
    }

    public int Value { get { return this.value; } }
}
However, I typically "cheat" and do this:
class CantChangeThis
{
    public CantChangeThis(int value)
    {
        this.Value = value;
    }

    public int Value { get; private set; }
}
Then I got to wondering, "Why does this work?" Is it really thread-safe? If I use it like this:
var instance = new CantChangeThis(5);
ThreadPool.QueueUserWorkItem(_ => doStuff(instance));
Then what it's really doing is (I think): allocate memory for the object, assign the reference to instance, and then run the constructor on it.
However, that instance value is stored in shared memory. The two threads might have cache-inconsistent views of that memory on the heap. What is it that makes sure the threadpool thread actually sees the constructed instance and not some garbage data? Is there an implicit memory barrier at the end of any object construction?
No... invert them. It is more similar to: first the `new` operator/keyword constructs the object, and only then does the `=` assignment operator store the reference into `var instance`. You can check this by throwing an exception in the constructor: the reference variable won't be assigned.
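The check described above can be sketched like this (the class name `ThrowsInCtor` is illustrative, not from the original post):

```csharp
using System;

class ThrowsInCtor
{
    public ThrowsInCtor(int value)
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof(value));
        Value = value;
    }

    public int Value { get; private set; }
}

class Program
{
    static void Main()
    {
        ThrowsInCtor instance = null;
        try
        {
            // The constructor runs first; only if it completes does the
            // assignment to 'instance' execute.
            instance = new ThrowsInCtor(-1);
        }
        catch (ArgumentOutOfRangeException)
        {
            // The constructor threw, so 'instance' was never assigned.
            Console.WriteLine(instance == null); // prints True
        }
    }
}
```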
In general, you don't want another thread to be able to see a semi-initialized object (note that in the first version of Java this wasn't guaranteed... Java 1.0 had what is called a "weak" memory model). How is this achieved?
On Intel it is guaranteed:
The x86-x64 processor will not reorder two writes, nor will it reorder two reads.
This is quite important :-) and it guarantees that this problem won't happen. This guarantee isn't part of .NET or of the ECMA C# specification, but on Intel it is guaranteed by the processor, and on Itanium (an architecture without that guarantee) it was done by the JIT compiler (see the same link). It seems that on ARM this isn't guaranteed (still the same link), but I haven't seen anyone discuss it.
In general, in the example given, this isn't important, because:
Nearly all the operations that relate to threads use a full memory barrier (see Memory barrier generators). A full memory barrier guarantees that all read and write operations before the barrier really execute before it, and all read/write operations after the barrier execute after it. ThreadPool.QueueUserWorkItem surely uses a full memory barrier at some point, and the started thread must clearly start "fresh", so it can't have stale data (and per https://stackoverflow.com/a/10673256/613130, I'd say it is safe to assume you can rely on the implicit barrier).
Note that Intel processors are naturally cache-coherent... you have to disable cache coherency manually if you don't want it (see for example this question: https://software.intel.com/en-us/forums/topic/278286). So the only possible problems would be a variable "cached" in a register, a read that is executed early, or a write that is delayed (and all of these "problems" are "fixed" by the use of a full memory barrier).
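If you prefer not to rely on the implicit barriers, you can place a full fence explicitly with `Thread.MemoryBarrier`; a minimal sketch (the class name `Holder` and the `shared` field are illustrative):

```csharp
using System;
using System.Threading;

class Holder
{
    public int Value { get; private set; }
    public Holder(int value) { this.Value = value; }
}

class Program
{
    static Holder shared; // field that other threads would read

    static void Main()
    {
        var h = new Holder(5);
        // Full fence: the constructor's writes cannot be reordered past
        // this point, so any thread that sees 'shared' as non-null also
        // sees a fully constructed object.
        Thread.MemoryBarrier();
        shared = h;

        Console.WriteLine(shared.Value); // prints 5
    }
}
```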
Addendum
Your two pieces of code are equivalent. An auto-property is simply a "hidden" backing field plus boilerplate `get`/`set` accessors that are respectively `return hiddenField;` and `hiddenField = value;`. So if there were a problem with v2 of the code, there would be the same problem with v1 of the code :-)
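To make the equivalence concrete, here is roughly what the compiler generates for the auto-property version (the backing field name `hiddenField` is illustrative; the real compiler-generated name is not legal C#):

```csharp
using System;

class CantChangeThisExpanded
{
    private int hiddenField; // stands in for the compiler-generated backing field

    public CantChangeThisExpanded(int value)
    {
        this.Value = value;
    }

    public int Value
    {
        get { return this.hiddenField; }
        private set { this.hiddenField = value; }
    }
}

class Program
{
    static void Main()
    {
        var instance = new CantChangeThisExpanded(5);
        Console.WriteLine(instance.Value); // prints 5
    }
}
```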