
Double lock with volatile or memory barrier

This is the code I have in place; it has been running for a couple of months with no issues.

public sealed class Singleton 
{
    private static Singleton value;
    private static object syncRoot = new Object();
    public static Singleton Value 
    {
        get 
        {
            if (Singleton.value == null) 
            {
                lock (syncRoot) 
                {
                    if (Singleton.value == null) 
                    {
                        Singleton.value = new Singleton();
                    }
                }
            }
            return Singleton.value;
        }
    }      
}

However, I came across this link, which outlined issues with the above.

a) The write Singleton.value = new Singleton(); may get cached on the processor, so the other thread may never end up seeing it. To fix this, the volatile keyword is used.

Q(1): Doesn't the C# lock keyword take care of this?

b) Another, supposedly better, solution outlined in the same article is to avoid volatile and instead introduce System.Threading.Thread.MemoryBarrier(); after the write to Singleton.value.

Question:

Q(2) I don't quite understand the need for MemoryBarrier() after the write. What possible re-ordering could cause the other thread to see Singleton.value as null? The lock prevents other threads from even reading anything.

Q(3) Barriers will just maintain order, but what if the value is still read from some cache instead? Isn't volatile still required?

Q(4) Is a barrier really required there, since the C# lock itself places one?

Finally, do I need to update my code with either approach, or is it good enough as it is?

Edit: There is an answer proposing lazy initialization. I got it.

But what were they trying to accomplish using volatile and MemoryBarrier that lock doesn't guarantee?

Asked Dec 14 '22 by Frank Q.

2 Answers

This is the code I have in place; it has been running for a couple of months with no issues.

If there's a one-in-a-billion chance of failure, and the code runs a thousand times a day on a thousand machines, that's on average one impossible-to-debug critical failure every three years.

If it only fails on particular hardware, and you do all your testing on x86, then you'll never see the failure.

There is no such thing as testing for correctness of low-lock code. The code is either provably correct, or it's not. You can't rely on testing.

Doesn't the C# lock keyword take care of this?

The lock is elided on one of the reads.

The lock prevents other threads from even reading anything.

The lock is elided on one of the reads.
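
To make that concrete, here is the getter from the question with the reads annotated (this is a commented restatement of the code above, not new logic):

```csharp
public static Singleton Value
{
    get
    {
        // First read: performed with NO lock held. Nothing orders this
        // read relative to another thread's construction of the object.
        if (Singleton.value == null)
        {
            lock (syncRoot)
            {
                // Second read: this one IS protected by the lock's
                // acquire/release fences.
                if (Singleton.value == null)
                {
                    Singleton.value = new Singleton();
                }
            }
        }
        // This read may also happen entirely outside the lock.
        return Singleton.value;
    }
}
```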

Barriers will just maintain order, but what if the value is still read from some cache instead? Isn't volatile still required?

Reading from a cache is equivalent to moving a read backwards in time; barriers induced by volatile or explicit barriers impose restrictions on how such backwards movements can be observed.
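
As a sketch of what the fix described in the linked article looks like (same class as in the question; whether eliding the lock is worth it at all is a separate question), marking the field volatile gives the unguarded read acquire semantics and the write release semantics:

```csharp
public sealed class Singleton
{
    // 'volatile' gives the naked read in the getter acquire semantics
    // and the write inside the lock release semantics, so a thread
    // that observes a non-null reference also observes a fully
    // constructed object.
    private static volatile Singleton value;
    private static readonly object syncRoot = new object();

    private Singleton() { }

    public static Singleton Value
    {
        get
        {
            if (value == null)
            {
                lock (syncRoot)
                {
                    if (value == null)
                    {
                        value = new Singleton();
                    }
                }
            }
            return value;
        }
    }
}
```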

Is a barrier really required there, since the C# lock itself places one?

The lock is elided on one of the reads.

Do I need to update my code with either approach or is it good enough?

I would never, ever write code like this. If you need lazy initialization, use Lazy<T>. If you need a singleton, use the standard implementation of the singleton pattern. Don't solve these problems yourself; let experts solve these problems for you.

But what were they trying to accomplish using volatile and MemoryBarrier that lock doesn't guarantee?

They were trying to correctly elide a lock, thereby saving a few nanoseconds in the non-contended path. How valuable are those nanoseconds to your user, compared with the costs of hard-to-debug rare critical failures?

Any time you try to elide a lock you are fully immersed in the crazy world of the low-level memory model. You have to assume that all memory is constantly changing unless something is keeping it still; you have to assume that any and all legal re-orderings of memory accesses are possible, even ones that are impossible on most hardware. You don't know what weird hardware is going to be invented in the future and used to run your code.

Don't go there. I don't like to use threads; if I want to parallelize something my preference is to throw virtual machines, containers, or processes at the problem. If you have to use threads, try to not share memory. If you have to share memory, use the highest level constructs built by experts, like Task and Lazy, rather than rolling your own out of memory barriers and interlocked operations.

Answered Dec 16 '22 by Eric Lippert


As others have mentioned, you can save yourself a lot of trouble by just using Lazy<T> to generate your instance for you:

public sealed class Singleton 
{
    private static readonly Lazy<Singleton> _value = new Lazy<Singleton>(() => new Singleton());

    public static Singleton Value => _value.Value;
}

As people much smarter than me have pointed out, most of the time this can be simplified even further through the use of a static initializer:

public sealed class Singleton 
{
    public static readonly Singleton Value = new Singleton();
}

Refer to Eric Lippert's comments on my answer for the key difference between these approaches and what might help you make the decision between one or the other.
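
One difference worth knowing about, sketched here under the same Singleton shape as above: with a plain static field initializer the compiler marks the type beforefieldinit, which permits the runtime to run the initializer at some earlier, unspecified point; adding an explicit static constructor forces initialization to happen on first use of the type.

```csharp
public sealed class Singleton
{
    public static readonly Singleton Value = new Singleton();

    // Without this empty static constructor the type is marked
    // 'beforefieldinit', and the runtime may run the field
    // initializer earlier than the first access to Value.
    static Singleton() { }

    private Singleton() { }
}
```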

Answered Dec 16 '22 by Jesse Carter