
What is the difference between mutex and critical section?


For Windows, critical sections are lighter-weight than mutexes.

Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.

Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention; uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).

I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.

Here's the test code. I ran it with the mutex timed first and then second and got similar results either way, so ordering isn't skewing the measurements.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
    CRITICAL_SECTION critSec;
    InitializeCriticalSection(&critSec);

    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    LARGE_INTEGER start, end;

    // Force code into memory, so we don't see any effects of paging.
    EnterCriticalSection(&critSec);
    LeaveCriticalSection(&critSec);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        EnterCriticalSection(&critSec);
        LeaveCriticalSection(&critSec);
    }
    QueryPerformanceCounter(&end);

    // Elapsed time in milliseconds for the critical section loop.
    int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    // Force code into memory, so we don't see any effects of paging.
    WaitForSingleObject(mutex, INFINITE);
    ReleaseMutex(mutex);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        WaitForSingleObject(mutex, INFINITE);
        ReleaseMutex(mutex);
    }
    QueryPerformanceCounter(&end);

    // Elapsed time in milliseconds for the mutex loop.
    int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    printf("Mutex: %d ms  CritSec: %d ms\n", totalTime, totalTimeCS);

    DeleteCriticalSection(&critSec);
    CloseHandle(mutex);
    return 0;
}

From a theoretical perspective, a critical section is a piece of code that must not be run by multiple threads at once because the code accesses shared resources.

A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.

Semaphores and Monitors are common implementations of a mutex.
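
To make the "implementations of a mutex" point concrete, here is a minimal Win32 sketch of a binary semaphore (initial and maximum count of 1) used as a mutex around a critical section; the variable names are invented for the illustration:

#include <windows.h>

int main(void)
{
    // A semaphore with a maximum count of 1 behaves like a mutex:
    // at most one thread can be past the Wait at any time.
    HANDLE gate = CreateSemaphore(NULL, 1, 1, NULL);

    WaitForSingleObject(gate, INFINITE);   // acquire: enter the critical section
    // ... code that touches the shared resource ...
    ReleaseSemaphore(gate, 1, NULL);       // release: leave the critical section

    CloseHandle(gate);
    return 0;
}

One practical difference: unlike a Win32 mutex, a semaphore has no notion of an owning thread, so any thread could release it.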

In practice there are many mutex implementations available in Windows. They differ mainly, as a consequence of how they are implemented, in their level of locking, their scope, their cost, and their performance under different levels of contention. See CLR Inside Out - Using concurrency for scalability for a chart of the costs of different mutex implementations.

Available synchronization primitives:

  • Monitor
  • Mutex
  • Semaphore
  • ReaderWriterLock
  • ReaderWriterLockSlim
  • Interlocked

The lock(object) statement is implemented using a Monitor - see MSDN for reference.

In recent years much research has been done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes finish their work, so that it can finally finish its own. As a consequence, a process can finish its work even when other processes that tried to perform some work hang; with locks, those hung processes would never release their locks, preventing other processes from continuing.
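
To give a flavor of the non-blocking style, here is a small sketch of a lock-free counter increment built on a compare-and-swap retry loop (using Win32 InterlockedCompareExchange; the counter and function names are made up for the example):

#include <windows.h>
#include <stdio.h>

static volatile LONG counter = 0;

// Lock-free increment: re-read and retry the compare-and-swap until no other
// thread changed the value between our read and our write. No thread ever
// blocks, so a stalled thread cannot stop the others from making progress.
void LockFreeIncrement(volatile LONG *value)
{
    LONG oldVal, newVal;
    do
    {
        oldVal = *value;
        newVal = oldVal + 1;
    } while (InterlockedCompareExchange(value, newVal, oldVal) != oldVal);
}

int main(void)
{
    LockFreeIncrement(&counter);
    printf("%ld\n", counter);   // prints 1
    return 0;
}

In practice InterlockedIncrement does this in a single call; the explicit loop just shows the retry pattern that lock-free algorithms build on.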


In addition to the other answers, the following details are specific to critical sections on Windows:

  • in the absence of contention, acquiring a critical section is as simple as an InterlockedCompareExchange operation
  • the critical section structure holds room for a mutex, which is initially unallocated
  • if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section then degrades to that of the mutex
  • if you anticipate high contention, you can initialize the critical section with a spin count (see the sketch just after this list)
  • if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for up to that many iterations before falling back to a kernel wait. This can perform better than sleeping, because the cycles needed to context-switch to another thread can be far more than the cycles the owning thread needs to release the lock
  • if the spin count expires, the mutex will be allocated
  • when the owning thread releases the critical section, it must check whether the mutex has been allocated; if it has, it signals the mutex to release a waiting thread
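
A minimal sketch of that spin-count initialization, using the standard Win32 call InitializeCriticalSectionAndSpinCount; the spin count of 4000 is only an illustrative value, not a recommendation:

#include <windows.h>

CRITICAL_SECTION cs;

int main(void)
{
    // Spin up to 4000 times on a multiprocessor machine before the
    // acquire falls back to a kernel wait (illustrative value only).
    InitializeCriticalSectionAndSpinCount(&cs, 4000);

    EnterCriticalSection(&cs);
    // ... touch the shared state ...
    LeaveCriticalSection(&cs);

    DeleteCriticalSection(&cs);
    return 0;
}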

In Linux, I think there is a "spin lock" that serves a similar purpose to a critical section with a spin count.


Critical sections and mutexes are not operating-system specific; they are concepts of multithreading/multiprocessing.

A critical section is a piece of code that must only be run by one thread at a time. For example, suppose there are 5 threads running simultaneously and a function called critical_section_function() that updates an array: you don't want all 5 threads updating the array at once, so while one thread is running critical_section_function(), none of the other threads may run it.

A mutex is a way of implementing the critical-section code (think of it like a token: a thread must have possession of it to run the critical section code).
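
A rough sketch of that scenario in Win32 C (the thread count, array size and names are invented for the illustration): each thread must hold the mutex, the "token", before it may run the array-updating code.

#include <windows.h>
#include <stdio.h>

#define NUM_THREADS 5
#define ARRAY_SIZE  100

static int sharedArray[ARRAY_SIZE];
static HANDLE token;   // the mutex: only the holder may run the critical section

DWORD WINAPI critical_section_function(LPVOID arg)
{
    WaitForSingleObject(token, INFINITE);   // take possession of the token
    for (int i = 0; i < ARRAY_SIZE; i++)    // update the array exclusively
        sharedArray[i]++;
    ReleaseMutex(token);                    // hand the token back
    return 0;
}

int main(void)
{
    token = CreateMutex(NULL, FALSE, NULL);

    HANDLE threads[NUM_THREADS];
    for (int t = 0; t < NUM_THREADS; t++)
        threads[t] = CreateThread(NULL, 0, critical_section_function, NULL, 0, NULL);

    WaitForMultipleObjects(NUM_THREADS, threads, TRUE, INFINITE);
    printf("sharedArray[0] = %d\n", sharedArray[0]);   // 5: each thread updated it in turn

    for (int t = 0; t < NUM_THREADS; t++)
        CloseHandle(threads[t]);
    CloseHandle(token);
    return 0;
}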