 

What is lock-free multithreaded programming?

I have seen people/articles/SO posts claiming they have designed their own "lock-free" container for multithreaded usage. Assuming they haven't used a performance-hitting modulus trick (i.e. each thread can only insert based upon some modulo), how can data structures be multi-threaded but also lock-free?

This question is intended for C and C++.

asked Dec 23 '12 by user997112


People also ask

What is a lock-free thread?

Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free.
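To make the progress guarantee concrete, here is a minimal sketch (my own illustration, not part of the quoted definition) of a Treiber-style lock-free stack push built on compare-and-swap. In every round of contention exactly one thread's compare_exchange succeeds, so the system as a whole keeps making progress even if some threads are suspended mid-operation:

#include <atomic>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

void push(int value) {
    // Hypothetical example: prepare the new node pointing at the current head.
    Node* node = new Node{value, head.load(std::memory_order_relaxed)};
    // Retry until this node becomes the new head. A losing thread is never
    // blocked; on failure, node->next is refreshed with the current head.
    while (!head.compare_exchange_weak(node->next, node,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
        // loop and try again
    }
}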

Is lock-free always faster?

"For certain workloads" could also be interpreted as "for those workloads that can be synchronized with a lock free data structure". In other words they are always faster, but cannot be always applied.

What is a lock-free queue?

A lock-free queue is a concurrent queue that does not use locking. When using a lock-free queue, slow or stopped processes do not prevent other processes from accessing the data in it. A lock-free queue has the same two main interfaces as a normal queue: enqueue and dequeue.
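One common shape for such a queue is a single-producer/single-consumer ring buffer. The following is a minimal sketch under that assumption (the class name and fixed capacity are my own, not from the quoted text); the producer only writes the tail index and the consumer only writes the head index, so neither ever blocks the other:

#include <atomic>
#include <cstddef>
#include <optional>

template <typename T, std::size_t Capacity>
class SpscQueue {
    T buffer_[Capacity];
    std::atomic<std::size_t> head_{0};  // next slot to read (consumer-owned)
    std::atomic<std::size_t> tail_{0};  // next slot to write (producer-owned)

public:
    bool enqueue(const T& item) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        const std::size_t next = (tail + 1) % Capacity;
        if (next == head_.load(std::memory_order_acquire))
            return false;                       // queue full
        buffer_[tail] = item;
        tail_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<T> dequeue() {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire))
            return std::nullopt;                // queue empty
        T item = buffer_[head];
        head_.store((head + 1) % Capacity, std::memory_order_release);
        return item;
    }
};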

What is multithreaded thread programming?

A thread is an independent unit of execution created within the context of a process (or application that is being executed). When multiple threads are executing in a process at the same time, we get the term “multithreading.” Think of it as the application's version of multitasking.
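For example, a minimal sketch (my own, not from the quoted text) using std::thread, where several threads run concurrently within one process and the main thread waits for them to finish:

#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        // Each lambda runs on its own thread; output lines may interleave.
        workers.emplace_back([i] {
            std::cout << "hello from thread " << i << '\n';
        });
    }
    for (auto& t : workers)
        t.join();   // wait for every thread to finish
}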


1 Answer

The key in lock-free programming is to use hardware-intrinsic atomic operations.

As a matter of fact, even locks themselves must use those atomic operations!
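To illustrate that point, here is a minimal spinlock sketch (my own illustration, not code from the answer) built directly on std::atomic_flag; test_and_set atomically reads the old value and sets the flag, so only one thread can take the lock at a time:

#include <atomic>

class SpinLock {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;

public:
    void lock() {
        // Spin until we atomically observe the flag clear and set it.
        while (flag_.test_and_set(std::memory_order_acquire)) {
            // busy-wait; a real lock would back off or ask the OS to sleep
        }
    }

    void unlock() {
        flag_.clear(std::memory_order_release);
    }
};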

But the difference between locked and lock-free programming is that a lock-free program can never be stalled entirely by any single thread. If, in a locking program, one thread acquires a lock and then gets suspended indefinitely, the entire program is blocked and cannot make progress. By contrast, a lock-free program can make progress even if individual threads are suspended indefinitely.

Here's a simple example: A concurrent counter increment. We present two versions which are both "thread-safe", i.e. which can be called multiple times concurrently. First the locked version:

#include <mutex>

int counter = 0;
std::mutex counter_mutex;

void increment_with_lock() {
    // Hold the mutex for the duration of this scope; other threads must wait.
    std::lock_guard<std::mutex> _(counter_mutex);
    ++counter;
}

Now the lock-free version:

#include <atomic>

std::atomic<int> counter(0);

void increment_lockfree() {
    // A single hardware atomic read-modify-write; no thread waits for a lock.
    ++counter;
}

Now imagine hundreds of threads all call the increment_* function concurrently. In the locked version, no thread can make progress until the lock-holding thread unlocks the mutex. In the lock-free version, by contrast, all threads can make progress. If a thread is held up, it simply won't do its share of the work, but everyone else gets to get on with their work.
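A hypothetical driver for the two functions above (assuming the definitions above are in scope) might look like this; swapping increment_lockfree for increment_with_lock changes which of the two behaviours you get:

#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 100; ++i)
        threads.emplace_back([] {
            for (int k = 0; k < 1000; ++k)
                increment_lockfree();   // or increment_with_lock()
        });
    for (auto& t : threads)
        t.join();
    // counter now holds 100 * 1000
}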

It is worth noting that in general lock-free programming trades throughput and mean latency for predictable latency. That is, a lock-free program will usually get less done than a corresponding locking program if there is not too much contention (since atomic operations are slow and affect a lot of the rest of the system), but it guarantees never to produce unpredictably large latencies.
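As a rough illustration of where that cost comes from (my sketch, not part of the answer): on hardware without a native fetch-and-add, an atomic increment behaves roughly like the following compare-and-swap retry loop. Under heavy contention a given thread may retry many times, but every failed attempt means some other thread succeeded, which is exactly the lock-free progress guarantee:

#include <atomic>

void increment_with_cas(std::atomic<int>& counter) {
    int expected = counter.load(std::memory_order_relaxed);
    // On failure, `expected` is refreshed with the current value; try again.
    while (!counter.compare_exchange_weak(expected, expected + 1,
                                          std::memory_order_relaxed)) {
    }
}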

answered Oct 19 '22 by Kerrek SB