Let's say I have two variables, protected_var1 and protected_var2. Let's further assume that these variables are updated by multiple threads, and are fairly independent in that usually one or the other, but not both, is worked on, so each has its own mutex guard for efficiency.
Assuming:
- I always lock the mutexes in the same order (mutex1 then mutex2) everywhere in my code that both locks are required.
- Both mutexes are also used on their own in many other places (just lock mutex1, or just lock mutex2).
Does the order in which I unlock the mutexes at the end of a function using both make a difference in this situation?
void foo()
{
    pthread_mutex_lock(&mutex1);
    pthread_mutex_lock(&mutex2);
    int x = protected_var1 + protected_var2;
    pthread_mutex_unlock(&mutex1); // Does the order of these two unlocks matter?
    pthread_mutex_unlock(&mutex2);
}
I was asked about this situation in an interview a long time ago, and I came away feeling that the answer was yes, the order of those two unlocks does matter. But I cannot for the life of me figure out how a deadlock could result from this if the locks are always obtained in the same order wherever both are used.
The order shouldn't matter, as long as you don't attempt to acquire another lock between the releases. The important thing is to always acquire the locks in the same order; otherwise, you risk a deadlock.
EDIT:
To expand on the constraint: you must establish a strict ordering among the mutexes, e.g. mutex1 precedes mutex2 (the rule holds for any number of mutexes). You may only request a lock on a mutex if you don't hold a mutex that comes after it in the order; e.g. you may not request a lock on mutex1 if you hold a lock on mutex2. As long as these rules are respected, you should be safe. With regard to releasing: if you release mutex1, then try to reacquire it before releasing mutex2, you've violated the rule. In this regard, there may be some advantage in respecting a stack-like order: last acquired is always first released. But that's an indirect effect; the rule is that you cannot request a lock on mutex1 if you hold one on mutex2, regardless of whether you held a lock on mutex1 when you acquired the lock on mutex2.
It doesn't matter for correctness of locking. The reason is that, even supposing some other thread is waiting to lock mutex1 and then mutex2, the worst case is that it gets immediately scheduled as soon as you release mutex1 (and acquires mutex1). It then blocks waiting for mutex2, which the thread you're asking about will release as soon as it gets scheduled again, and there's no reason that shouldn't happen soon (immediately, if these are the only two threads in play).
So there might be a small performance cost in that exact situation, compared with releasing mutex2 first, where only one rescheduling operation would occur. Nothing you'd normally expect to predict or worry about, though; it's all within the boundaries of "scheduling often isn't deterministic".
The order you release the locks could certainly affect scheduling in general, though. Suppose that there are two threads waiting for your thread, and one of them is blocked on mutex1 while the other is blocked on mutex2. It might turn out that whichever lock you release first, that thread gets to run first, simply because your thread has outlived its welcome (consumed more than an entire time-slice), and hence gets descheduled as soon as anything else is runnable. But that can't cause a fault in an otherwise-correct program: you aren't allowed to rely on your thread being descheduled as soon as it releases the first lock. So either order of those two waiting threads running, both running simultaneously if you have multiple cores, or the two alternating on one core, must all be equally safe whichever order you release the locks.