
Is this multithreaded Singleton more efficient?

I have a high-throughput Singleton in a multithreaded environment. Usually I'd do something like this:

private static Foo _instance;
private static readonly object _syncLock = new object();

public static Foo GetInstance()
{
    lock (Foo._syncLock)
    {
        if (Foo._instance == null)
            Foo._instance = new Foo();
        return Foo._instance;
    }
}

I'm wondering if doing the following instead would be more efficient since it would avoid the continual thread locking, or are there hidden problems with it?

public static Foo GetInstance()
{
    // Fast path: skip the lock entirely once the instance already exists.
    if (Foo._instance != null)
        return Foo._instance;
    lock (Foo._syncLock)
    {
        if (Foo._instance == null)
            Foo._instance = new Foo();
        return Foo._instance;
    }
}
asked Jan 12 '23 by Nick Gotch


1 Answer

I'm wondering if doing the following instead would be more efficient since it would avoid the continual thread locking, or are there hidden problems with it?

Your question is "would there be a performance gain by going to a dangerous low-lock pattern?" That is the completely wrong question to ask. Never reason this way! That way lies wasted time, wasted effort, and crazy, impossible-to-debug bugs.

The correct question is "Do my measurements strongly indicate I have a performance problem to begin with?".

If the answer is "no" then you're done.

Only if the answer is "yes" should you ask the next question, which is "Can I eliminate my performance problem by eliminating contention on the lock?"

If the answer is "yes" then eliminate contention on the lock and go back to the first question.

Only if the answer is "no" should you then ask the next question, which is "Will going to a low-lock solution give acceptable performance?"

Note that for the answer to this question to be "yes" you must be in a situation where the ten-nanosecond penalty imposed by the uncontended lock is the gating factor on your performance. Very few people are in a position where ten or twenty nanoseconds is too long.
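If you want to put a rough number on that penalty for your own hardware, a crude single-threaded measurement is easy to take. The sketch below is mine (the type name and iteration count are arbitrary), and a proper benchmarking harness such as BenchmarkDotNet would give more trustworthy figures:

using System;
using System.Diagnostics;

static class LockCostProbe
{
    private static readonly object _gate = new object();
    private static int _counter;

    static void Main()
    {
        const int iterations = 100_000_000;

        // Time the uncontended case: one thread repeatedly taking and
        // releasing the same monitor.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            lock (_gate) { _counter++; }
        }
        sw.Stop();

        Console.WriteLine(
            $"~{sw.Elapsed.TotalMilliseconds * 1_000_000 / iterations:F1} ns per uncontended lock");
    }
}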

In the incredibly unlikely event that the answer is "yes", you should go on to the next question, which is "Does a correct implementation of double-checked locking eliminate my performance problem?"

If the answer is "no", meaning that even a correct implementation of double-checked locking isn't fast enough, then implementing it is a non-starter; you'll have to solve your performance problem some other way.

Only if the answer to that question is "yes" should you implement double-checked locking.

Now let's come to your actual question:

are there hidden problems with it?

Your implementation is correct. However, the moment you stray from the blessed pattern, all bets are off. For example:

static object sync = new object();
static bool b = false;
static int x = 0;
static int GetIt()
{
  if (!b)
  {
    lock(sync)
    {
      if (!b)
      {
        b = true;
        x = ExpensiveComputation();
      }
    }
  }
  return x;
}

That looks correct, right? But it is not correct! Consider the low-lock path. Since there is no barrier on this path, the value of x can be pre-fetched as zero on one thread, and then another thread can run and set b to true and x to 123, and then the original thread can fetch b, get true, and return the pre-fetched x.
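For what it's worth, the usual repair, if you were determined to keep the low-lock fast path, is to publish x before the flag and to give the flag acquire/release semantics. The sketch below uses System.Threading.Volatile and assumes, as above, that ExpensiveComputation is defined elsewhere; it is shown only to make the failure mode concrete, not as a recommendation:

// requires: using System.Threading;
static object sync = new object();
static bool b = false;
static int x = 0;

static int GetIt()
{
    // Acquire read: if this thread observes b == true, it is also guaranteed
    // to observe the write to x made before the release write below.
    if (!Volatile.Read(ref b))
    {
        lock (sync)
        {
            if (!b)
            {
                x = ExpensiveComputation();  // store the value first...
                Volatile.Write(ref b, true); // ...then publish the flag (release write)
            }
        }
    }
    return x;
}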

So what are the solutions? In my preference order they are:

  • Don't be lazy. Initialize the static field once and be done with it (sketched below).
  • Use the blessed lazy singleton pattern documented on Jon Skeet's web site.
  • Use Lazy<T> (sketched below).
  • Use single-checked locking.
  • If you don't care whether the singleton is, in rare situations, created twice with the extra copy discarded, use Interlocked.CompareExchange (sketched below).
  • Use a blessed double-checked locking pattern.
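
To make a few of those options concrete, here are minimal sketches. The class names, field names, and the assumption that the constructor is the expensive part are mine, not from the answer; treat these as illustrations of the techniques rather than drop-in code.

using System;
using System.Threading;

public sealed class Foo
{
    private Foo() { /* expensive setup happens here */ }

    // "Don't be lazy": the CLR guarantees this field is initialized
    // exactly once, in a thread-safe way, before it is first read.
    public static readonly Foo Instance = new Foo();
}

public sealed class LazyFoo
{
    private LazyFoo() { }

    // Lazy<T> in ExecutionAndPublication mode (the default) gives lazy,
    // once-only construction with no explicit locking in your own code.
    private static readonly Lazy<LazyFoo> _lazy =
        new Lazy<LazyFoo>(() => new LazyFoo(), LazyThreadSafetyMode.ExecutionAndPublication);

    public static LazyFoo Instance => _lazy.Value;
}

public sealed class RacyFoo
{
    private static RacyFoo _instance;

    private RacyFoo() { }

    // Interlocked.CompareExchange: lock-free, but two threads can race to
    // construct; the loser's instance is discarded. Only acceptable when
    // creating and throwing away an extra instance is harmless.
    public static RacyFoo GetInstance()
    {
        if (_instance == null)
        {
            var candidate = new RacyFoo();
            Interlocked.CompareExchange(ref _instance, candidate, null);
        }
        return _instance;
    }
}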
answered Jan 21 '23 by Eric Lippert