I took this DCL (double-checked locking) code from Joe Duffy's book 'Concurrent Programming on Windows':
class LazyInit<T> where T : class
{
    private volatile T m_value;
    private object m_sync = new object();
    private Func<T> m_factory;

    public LazyInit(Func<T> factory) { m_factory = factory; }

    public T value
    {
        get
        {
            if (m_value == null)
            {
                lock (m_sync)
                {
                    if (m_value == null)
                    {
                        m_value = m_factory();
                    }
                }
            }
            return m_value;
        }
    }
}
It is said that marking m_value volatile prevents the write reordering that could lead to other threads seeing a non-null object with uninitialized fields. If the problem is caused purely by that possible reordering of writes, can I just use a volatile write instead of marking the field volatile, like below? (The code looks a little awkward; it is only meant to demonstrate the idea, and I just want to confirm whether a volatile write alone is enough.)
class LazyInit<T> where T : class
{
    private object m_value;
    private object m_sync = new object();
    private Func<T> m_factory;

    public LazyInit(Func<T> factory) { m_factory = factory; }

    public T value
    {
        get
        {
            if (m_value == null)
            {
                lock (m_sync)
                {
                    if (m_value == null)
                    {
                        Thread.VolatileWrite(ref m_value, m_factory());
                    }
                }
            }
            return (T)m_value;
        }
    }
}
A related question concerns the Interlocked version from the book:
class LazyInitRelaxedRef<T> where T : class
{
    private volatile T m_value;
    private Func<T> m_factory;

    public LazyInitRelaxedRef(Func<T> factory) { m_factory = factory; }

    public T Value
    {
        get
        {
            if (m_value == null)
                Interlocked.CompareExchange(ref m_value, m_factory(), null);
            return m_value;
        }
    }
}
Since the ECMA CLI spec says that interlocked operations 'perform implicit acquire/release operations', do we still need volatile in this case?
Double-checked locking (also known as the 'double-checked locking optimization') is a software design pattern used to reduce the overhead of acquiring a lock: the locking criterion (the 'lock hint') is first tested without synchronization, and only if it appears to be met does the thread acquire the lock and test it again, so the lock is taken only when it is really necessary. The volatile keyword indicates that a field might be modified by multiple threads executing at the same time; without it, the compiler, the runtime system, and even the hardware may rearrange reads and writes to memory locations for performance reasons. That reordering is exactly why the pattern, although widely cited as an efficient way to implement lazy initialization in a multithreaded environment, does not work reliably in a platform-independent way in Java without additional synchronization.
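To make the failure mode concrete, here is a minimal sketch of where the reordering can bite when the field is a plain, non-volatile reference (the Widget and UnsafePublisher types are made up for illustration; they are not from the book or the question):

using System;

class Widget
{
    public int Answer;
    public Widget() { Answer = 42; }
}

class UnsafePublisher
{
    private Widget m_value;                     // deliberately NOT volatile
    private readonly object m_sync = new object();

    public Widget Value
    {
        get
        {
            if (m_value == null)
            {
                lock (m_sync)
                {
                    if (m_value == null)
                    {
                        // Conceptually two steps happen here:
                        //   1) run the Widget constructor (writes Answer = 42)
                        //   2) store the new reference into m_value
                        // Without a release fence on step 2, the store of the
                        // reference may become visible to another core before
                        // the constructor's write, so a reader that observes a
                        // non-null m_value can still see Answer == 0.
                        m_value = new Widget();
                    }
                }
            }
            return m_value;
        }
    }
}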
First, messing with volatile is really hard, so don't get too loose with it! But here is a really close answer to your question, and here is an article that I think everyone should read before using the volatile keyword, and definitely before starting to use VolatileRead, VolatileWrite and MemoryBarrier.
The answer in the first link is: no, you don't need volatile, you just need to call System.Threading.Thread.MemoryBarrier() right before you assign the new value. That works because the release fence implied by a volatile write guarantees that every write made before it (in particular, the writes that initialize the new object) is completed and visible before the reference itself is published; a full memory barrier placed immediately before the assignment gives you at least that much.
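Applied to the getter from the question, that suggestion would look roughly like the sketch below (my rewording of what the linked answer describes, not code from the book; the class name LazyInitWithBarrier is made up):

using System;
using System.Threading;

class LazyInitWithBarrier<T> where T : class
{
    private T m_value;                          // plain field, no volatile
    private readonly object m_sync = new object();
    private Func<T> m_factory;

    public LazyInitWithBarrier(Func<T> factory) { m_factory = factory; }

    public T Value
    {
        get
        {
            if (m_value == null)
            {
                lock (m_sync)
                {
                    if (m_value == null)
                    {
                        T temp = m_factory();
                        // Full fence: the factory's writes cannot be moved
                        // past this point, so they are completed before the
                        // reference is published on the next line.
                        Thread.MemoryBarrier();
                        m_value = temp;
                    }
                }
            }
            return m_value;
        }
    }
}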
So, what does Thread.VolatileWrite() do, and does it give us the same guarantee we get from the volatile keyword? Here is the full code of that function (the int overload):
public static void VolatileWrite(ref int address, int value)
{
    MemoryBarrier();
    address = value;
}
Yes, it calls MemoryBarrier right before it assigns your value, which is sufficient!
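For what it's worth, if you are on .NET 4.5 or later you can get the same store-with-release semantics through the typed Volatile.Write, which avoids the object-typed field and the cast in your second snippet. A minimal sketch under that assumption (the class name LazyInitVolatileWrite is made up):

using System;
using System.Threading;

class LazyInitVolatileWrite<T> where T : class
{
    private T m_value;                          // plain field; the publish uses Volatile.Write
    private readonly object m_sync = new object();
    private Func<T> m_factory;

    public LazyInitVolatileWrite(Func<T> factory) { m_factory = factory; }

    public T Value
    {
        get
        {
            if (m_value == null)
            {
                lock (m_sync)
                {
                    if (m_value == null)
                    {
                        // Volatile.Write<T> stores the reference with release
                        // semantics, so the factory's writes become visible
                        // before the reference does.
                        Volatile.Write(ref m_value, m_factory());
                    }
                }
            }
            return m_value;
        }
    }
}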