
What is thread contention?

Several answers seem to focus on lock contention, but locks are not the only resources on which contention can be experienced. Contention is simply when two threads try to access either the same resource or related resources in such a way that at least one of the contending threads runs more slowly than it would if the other thread(s) were not running.

The most obvious example of contention is on a lock. If thread A has a lock and thread B wants to acquire that same lock, thread B will have to wait until thread A releases the lock.
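
To make that concrete, here is a minimal C++ sketch (the names and iteration counts are my own, purely for illustration): both threads repeatedly fight over the same std::mutex, and whichever thread arrives second must block until the holder releases it.

    #include <mutex>
    #include <thread>

    std::mutex m;
    long shared_counter = 0;

    void worker() {
        for (int i = 0; i < 1000000; ++i) {
            std::lock_guard<std::mutex> guard(m); // the second thread blocks here while the first holds m
            ++shared_counter;                     // critical section protected by the lock
        }
    }

    int main() {
        std::thread a(worker);
        std::thread b(worker); // both threads contend for the same mutex
        a.join();
        b.join();
    }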

Now, this is platform-specific, but the thread may experience slowdowns even if it never has to wait for the other thread to release the lock! This is because a lock protects some kind of data, and the data itself will often be contended as well.

For example, consider a thread that acquires a lock, modifies an object, then releases the lock and does some other things. If two threads are doing this, even if they never fight for the lock, the threads may run much slower than they would if only one thread was running.

Why? Say each thread is running on its own core on a modern x86 CPU and the cores don't share an L2 cache. With just one thread, the object may remain in the L2 cache most of the time. With both threads running, each time one thread modifies the object, the other thread will find the data is not in its L2 cache because the other CPU invalidated the cache line. On a Pentium D, for example, this will cause the code to run at FSB speed, which is much less than L2 cache speed.
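
Here is a rough C++ sketch of that scenario (the busy_work filler and the counts are arbitrary assumptions): each thread holds the lock only briefly and then does unrelated work, so the lock itself is almost never contended, yet every write to the shared value still invalidates the cache line in the other core's cache.

    #include <mutex>
    #include <thread>

    std::mutex m;
    long shared_value = 0; // sits on one cache line that keeps bouncing between the cores

    void busy_work() {
        volatile long x = 0;          // per-thread work outside the lock, long enough that
        for (int i = 0; i < 200; ++i) // the two threads rarely want the mutex at the same time
            x += i;
    }

    void worker() {
        for (int i = 0; i < 1000000; ++i) {
            {
                std::lock_guard<std::mutex> guard(m);
                ++shared_value; // each write invalidates the line in the other core's cache
            }
            busy_work(); // the lock is free most of the time, yet the data still ping-pongs
        }
    }

    int main() {
        std::thread a(worker), b(worker);
        a.join();
        b.join();
    }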

Since contention can occur even if the lock itself is never contended, contention can also occur when there is no lock at all. For example, say your CPU supports an atomic increment of a 32-bit variable. If one thread keeps incrementing and decrementing a variable, the variable will be hot in that core's cache much of the time. If two threads do it, their caches will contend for ownership of the memory holding that variable, and many accesses will be slower as the cache coherency protocol shuttles ownership of the cache line back and forth between the cores.
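
As a hedged C++ illustration (the counts are arbitrary): there is no lock at all here, but running this with two threads is typically much slower per increment than running it with one, because ownership of the counter's cache line keeps migrating between the cores. Timing it with one thread versus two on your own hardware shows the effect.

    #include <atomic>
    #include <thread>

    std::atomic<int> counter{0};

    void worker() {
        for (int i = 0; i < 10000000; ++i)
            counter.fetch_add(1, std::memory_order_relaxed); // lock-free, but the cache line still ping-pongs
    }

    int main() {
        std::thread a(worker), b(worker); // compare wall-clock time with one thread vs. two
        a.join();
        b.join();
    }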

Ironically, locks typically reduce contention. Why? Because without a lock, two threads could operate on the same object or collection and cause lots of contention (lock-free queues, for example, allow exactly this and can still contend heavily on the shared data). Locks tend to deschedule contending threads, allowing non-contending threads to run instead. If thread A holds a lock and thread B wants that same lock, the implementation can run thread C instead. If thread C doesn't need that lock, then future contention between threads A and B can be avoided for a while. (Of course, this assumes there are other threads that could run. It won't help if the only way the system as a whole can make useful progress is by running threads that contend.)


Essentially, thread contention is a condition in which one thread is waiting for a lock or object that is currently held by another thread. The waiting thread cannot use that object until the other thread has released it.


From here:

A contention occurs when a thread is waiting for a resource that is not readily available; it slows the execution of your code, but can clear up over time.

A deadlock occurs when a thread is waiting for a resource that a second thread has locked, and the second thread is waiting for a resource that the first thread has locked. More than two threads can be involved in a deadlock. A deadlock never resolves itself. It often causes the whole application, or the part that is experiencing the deadlock, to halt.
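
To make the difference concrete, here is a minimal C++ sketch of the deadlock case (the mutex names are mine): each thread takes the two locks in the opposite order, so if both manage to grab their first lock before either attempts its second, neither can ever proceed.

    #include <mutex>
    #include <thread>

    std::mutex lock_a, lock_b;

    void thread1() {
        std::lock_guard<std::mutex> g1(lock_a);
        // ... some work ...
        std::lock_guard<std::mutex> g2(lock_b); // waits for lock_b, which thread2 may hold
    }

    void thread2() {
        std::lock_guard<std::mutex> g1(lock_b);
        // ... some work ...
        std::lock_guard<std::mutex> g2(lock_a); // waits for lock_a, which thread1 may hold -> deadlock
    }

    int main() {
        std::thread t1(thread1), t2(thread2);
        t1.join(); // may never return if the deadlock occurs
        t2.join();
    }

Acquiring the locks in one fixed global order (or with std::scoped_lock, which applies a deadlock-avoidance algorithm) prevents this; plain contention, by contrast, resolves itself as soon as the holder releases the resource.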


I think the OP should clarify the background of the question; I can think of two answers (though I'm sure there are more):

  1. If you are referring to the general "concept" of thread contention and how it can present itself in an application, I defer to @DavidSchwartz's detailed answer above.

  2. There is also the '.NET CLR LocksAndThreads\Total # of Contentions' performance counter. As taken from the PerfMon description, this counter is defined as:

    This counter displays the total number of times threads in the CLR have attempted to acquire a managed lock unsuccessfully. Managed locks can be acquired in many ways; by the "lock" statement in C# or by calling System.Monitor.Enter or by using MethodImplOptions.Synchronized custom attribute.

...and I'm sure there are similar counters for other OSes and application frameworks.