
Is Interlocked.CompareExchange really faster than a simple lock?

I came across a ConcurrentDictionary implementation for .NET 3.5 (I'm sorry, I couldn't find the link right now) that uses this approach for locking:

var current = Thread.CurrentThread.ManagedThreadId;
while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }

// PROCESS SOMETHING HERE

if (current != Interlocked.Exchange(ref owner, 0))
        throw new UnauthorizedAccessException("Thread had access to cache even though it shouldn't have.");

Instead of the traditional lock:

lock(lockObject)
{
    // PROCESS SOMETHING HERE
}

The question is: is there any real reason for doing this? Is it faster, or does it have some hidden benefit?

PS: I know there's a ConcurrentDictionary in more recent versions of .NET, but I can't use it in this legacy project.

Edit:

In my specific case, what I'm doing is just wrapping an internal Dictionary in such a way that access to it is thread-safe.

Example:

public bool RemoveItem(TKey key)
{
    // open lock
    var current = Thread.CurrentThread.ManagedThreadId;
    while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }


    // real processing starts here (entries is a regular Dictionary instance)
    var found = entries.Remove(key);


    // verify lock
    if (current != Interlocked.Exchange(ref owner, 0))
        throw new UnauthorizedAccessException("Thread had access to cache even though it shouldn't have.");
    return found;
}

As @doctorlove suggested, this is the code: https://github.com/miensol/SimpleConfigSections/blob/master/SimpleConfigSections/Cache.cs

Asked Dec 27 '13 by André Pena

2 Answers

There is no definitive answer to your question. I would answer: it depends.

What the code you've provided does is:

  1. wait for the object to be in a known state (threadId == 0 == no current work)
  2. do the work
  3. set the object back to the known state
  4. let another thread do its work in turn, because it can now go from step 1 to step 2

As you've noted, there is a loop in the code that actually performs the "wait" step. The thread isn't blocked until it can access the critical section; it just burns CPU instead. Try replacing your processing (in your case, the call to Remove) with Thread.Sleep(2000), and you'll see the other "waiting" thread consuming an entire CPU core for 2 seconds while it sits in the loop.
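To make that concrete, here's a minimal sketch of that experiment (the holder/waiter thread names and the static owner field are just illustrative, mirroring the question's acquire/release pattern): one thread "holds" the lock while sleeping for 2 seconds, and the spinning thread pins a core the whole time.

using System;
using System.Threading;

class SpinDemo
{
    static int owner; // 0 means "free", otherwise the owning thread's ManagedThreadId

    static void Main()
    {
        var holder = new Thread(() =>
        {
            var current = Thread.CurrentThread.ManagedThreadId;
            while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }
            Thread.Sleep(2000);                  // simulate slow work inside the "lock"
            Interlocked.Exchange(ref owner, 0);  // release
        });

        var waiter = new Thread(() =>
        {
            var current = Thread.CurrentThread.ManagedThreadId;
            // This loop spins flat out for ~2 seconds, keeping one core at 100%.
            while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }
            Interlocked.Exchange(ref owner, 0);  // release
        });

        holder.Start();
        Thread.Sleep(100);                       // let the holder acquire first
        waiter.Start();

        holder.Join();
        waiter.Join();
        Console.WriteLine("done");
    }
}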

So which one is better depends on several factors. For example: how many concurrent accesses are there? How long does the operation take to complete? How many CPUs do you have?

I would use lock instead of Interlocked because it's much easier to read and maintain. The exception would be a piece of code that is called millions of times, in a particular use case where you're sure Interlocked is faster.

So you'll have to measure both approaches yourself. If you don't have time for this, then you probably don't need to worry about performance, and you should use lock.
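If you do decide to measure, a rough micro-benchmark along these lines is enough to compare the two strategies on your own hardware (the Iterations constant, the shared entries dictionary and the two-thread setup are arbitrary choices for this sketch, not part of the original code):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class LockVsInterlocked
{
    const int Iterations = 1000000;
    static readonly Dictionary<int, int> entries = new Dictionary<int, int>();
    static readonly object lockObject = new object();
    static int owner;

    static void Main()
    {
        Console.WriteLine("lock:        {0} ms", Run(WithLock));
        Console.WriteLine("Interlocked: {0} ms", Run(WithInterlocked));
    }

    // Runs the same body on two competing threads and times the whole thing.
    static long Run(ThreadStart body)
    {
        entries.Clear();
        var threads = new[] { new Thread(body), new Thread(body) };
        var sw = Stopwatch.StartNew();
        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();
        return sw.ElapsedMilliseconds;
    }

    static void WithLock()
    {
        for (int i = 0; i < Iterations; i++)
            lock (lockObject) { entries[i % 100] = i; }
    }

    static void WithInterlocked()
    {
        var current = Thread.CurrentThread.ManagedThreadId;
        for (int i = 0; i < Iterations; i++)
        {
            while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }
            entries[i % 100] = i;
            Interlocked.Exchange(ref owner, 0);
        }
    }
}

Keep in mind that the results will vary with contention, the amount of work done inside the critical section, and the number of CPUs, which is exactly the point above.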

Answered by ken2k


Your CompareExchange sample code doesn't release the lock if an exception is thrown by "PROCESS SOMETHING HERE".

For this reason as well as the simpler, more readable code, I would prefer the lock statement.
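As a sketch (assuming the same entries dictionary and a private lockObject field), the lock-based equivalent of the RemoveItem example is shorter and releases the lock even if Remove throws:

private readonly object lockObject = new object();

public bool RemoveItem(TKey key)
{
    // the lock is released automatically, even if Remove throws (e.g. for a null key)
    lock (lockObject)
    {
        return entries.Remove(key);
    }
}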

You could rectify the problem with a try/finally, but this makes the code even uglier.
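For comparison, here's roughly what that try/finally version would look like, keeping the question's owner field and acquire loop unchanged:

public bool RemoveItem(TKey key)
{
    var current = Thread.CurrentThread.ManagedThreadId;
    while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }
    try
    {
        return entries.Remove(key);
    }
    finally
    {
        // always hand the lock back, even if Remove throws
        Interlocked.Exchange(ref owner, 0);
    }
}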

The linked ConcurrentDictionary implementation has a bug: it will fail to release the lock if the caller passes a null key, potentially leaving other threads spinning indefinitely.

As for efficiency, your CompareExchange version is essentially a spinlock, which can be efficient if threads are only likely to be blocked for short periods. But inserting into a managed dictionary can take a relatively long time, since the dictionary may need to be resized. Therefore, IMHO, this isn't a good candidate for a spinlock, which can be wasteful, especially on a single-processor system.

Answered by Joe