The common solution for preventing deadlock in code is to make sure locks are always acquired in the same order, regardless of which thread is accessing the resources.
For example, given threads T1 and T2, where T1 accesses resource A and then B, while T2 accesses resource B and then A: locking the resources in the order each thread happens to need them can cause a deadlock. The simple solution is to always lock A and then lock B, regardless of the order in which a specific thread will use the resources.
Problematic situation:
    Thread1                   Thread2
    -------                   -------
    Lock Resource A           Lock Resource B
    Do Resource A thing...    Do Resource B thing...
    Lock Resource B           Lock Resource A
    Do Resource B thing...    Do Resource A thing...
Possible Solution:
    Thread1                   Thread2
    -------                   -------
    Lock Resource A           Lock Resource A
    Lock Resource B           Lock Resource B
    Do Resource A thing...    Do Resource B thing...
    Do Resource B thing...    Do Resource A thing...
My question is: what other techniques, patterns, or common practices are used in coding to guarantee deadlock prevention?
Avoid deadlock by requesting and releasing locks in the same order. To avoid data corruption in multithreaded Java programs, shared data must be protected from concurrent modification and access. Locking can be performed at the object level using synchronized methods, synchronized blocks, or explicit lock objects.
One of the most common ways of avoiding a deadlock is to always lock the two mutexes in the same order. If we always lock mutex A before mutex B, then we'll never have a deadlock.
The technique you describe isn't just common: it's the one technique that has been proven to work all the time. There are a few other rules worth following when writing threaded code in C++ as well.
I could go on for a while, but in my experience, the easiest way to work with threads is using patterns that are well-known to everyone who might work with the code, such as the producer/consumer pattern: it's easy to explain and you only need one tool (a queue) to allow your threads to communicate with each other. After all, the only reason for two threads to be synchronized with each other, is to allow them to communicate.
More general advice:
    #include <thread>
    #include <cassert>
    #include <chrono>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>

    void nothing_could_possibly_go_wrong()
    {
        int flag = 0;

        std::condition_variable cond;
        std::mutex mutex;
        int done = 0;
        typedef std::unique_lock<std::mutex> lock;

        auto const f = [&]
        {
            if(flag == 0) ++flag;   // unsynchronized access to flag: a data race

            lock l(mutex);
            ++done;
            cond.notify_one();
        };

        std::thread threads[2] = { std::thread(f), std::thread(f) };

        threads[0].join();
        threads[1].join();

        lock l(mutex);
        cond.wait(l, [&done] { return done == 2; });  // capture by reference so the predicate sees updates

        // surely this can't fail!
        assert( flag == 1 );
    }

    int main()
    {
        for(;;) nothing_could_possibly_go_wrong();
    }
Consistent ordering of locking is pretty much the first and last word when it comes to deadlock avoidance.
There are related techniques, such as lockless programming (where no thread ever waits on a lock, and thus there is no possibility of a cycle), but that's really just a special case of the "avoid inconsistent locking order" rule -- i.e. they avoid inconsistent locking by avoiding all locking. Unfortunately, lockless programming has its own issues, so it's not a panacea either.
If you want to broaden the scope a bit, there are techniques for detecting deadlocks when they do occur (if for some reason you can't design your program to avoid them), and ways of breaking them when they do occur: always locking with a timeout, forcing one of the deadlocked threads' Lock() calls to fail, or even just killing one of the deadlocked threads. But I think they are all pretty inferior to simply making sure deadlocks cannot happen in the first place.
(By the way, if you want an automated way to check whether your program has potential deadlocks in it, check out Valgrind's Helgrind tool. It will monitor your code's locking patterns and notify you of any inconsistencies -- very useful.)