Race condition in glibc/NPTL/Linux robust mutexes?

Tags: c, linux, pthreads

In a comment on the question Automatically release mutex on crashes in Unix back in 2010, jilles claimed:

glibc's robust mutexes are so fast because glibc takes dangerous shortcuts. There is no guarantee that the mutex still exists when the kernel marks it as "will cause EOWNERDEAD". If the mutex was destroyed and the memory replaced by a memory mapped file that happens to contain the last owning thread's ID at the right place and the last owning thread terminates just after writing the lock word (but before fully removing the mutex from its list of owned mutexes), the file is corrupted. Solaris and will-be-FreeBSD9 robust mutexes are slower because they do not want to take this risk.

I can't make any sense of the claim, since destroying a mutex is not legal unless it's unlocked (and thus not in any thread's robust list), and I can't find any reference to such a bug or issue. Was the claim simply erroneous?

The reason I ask, and the reason I'm interested, is that this is relevant to the correctness of my own implementation, which is built on the same Linux robust-mutex primitive.

asked Aug 14 '12 by R.. GitHub STOP HELPING ICE

2 Answers

I think I found the race, and it is indeed very ugly. It goes like this:

Thread A holds the robust mutex and unlocks it. The basic unlock procedure, sketched in C after the list, is:

  1. Put it in the "pending" slot of the thread's robust list header.
  2. Remove it from the linked list of robust mutexes held by the current thread.
  3. Unlock the mutex.
  4. Clear the "pending" slot of the thread's robust list header.
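
To make the window concrete, here is a minimal C sketch of that unlock sequence. The names (robust_unlock, lock_word, the helper stubs) are illustrative, not glibc's actual identifiers, and the real implementation registers the list head with the kernel via set_robust_list(2):

    #include <stdatomic.h>
    #include <stddef.h>

    struct robust_list { struct robust_list *next; };

    struct robust_mutex {
        struct robust_list node;      /* intrusive node, lives in the mutex */
        _Atomic int        lock_word; /* 0 = free, else the owner's TID     */
    };

    struct robust_list_head {
        struct robust_list  list;            /* owned mutexes (linked list) */
        long                futex_offset;    /* node address -> lock word   */
        struct robust_list *list_op_pending; /* the "pending" slot          */
    };

    static _Thread_local struct robust_list_head robust_head;

    /* Assumed helpers, left as stubs for the sketch. */
    void list_remove(struct robust_list *node);
    void futex_wake_one(_Atomic int *lock_word);

    void robust_unlock(struct robust_mutex *m)
    {
        robust_head.list_op_pending = &m->node; /* step 1: mark pending      */
        list_remove(&m->node);                  /* step 2: unlink from list  */
        atomic_store(&m->lock_word, 0);         /* step 3: release lock word */
        futex_wake_one(&m->lock_word);          /*         and wake a waiter */
        /* RACE WINDOW: the mutex can be locked, unlocked, destroyed, and
         * its memory munmap'ed and reused by other threads while the
         * pending slot still points at it. */
        robust_head.list_op_pending = NULL;     /* step 4: clear pending     */
    }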

The problem is the window between steps 3 and 4. In that window, another thread in the same process can acquire the mutex, unlock it, and, rightly believing itself to be the final user, destroy it and free/munmap its memory. If any thread in the process then creates a shared mapping of a file, device, or shared memory that happens to be assigned the same address, and the value at the old lock-word location happens to match the TID of the thread still between steps 3 and 4, then, if the process is killed, the kernel corrupts the mapped file by setting the high bit of a 32-bit integer it believes is the mutex owner's TID.

The solution is to hold a global lock on mmap/munmap between steps 2 and 4 above, exactly the same as in my solution to the barrier issue described in my answer to this question:

Can a correct fail-safe process-shared barrier be implemented on Linux?
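
For illustration, here is a minimal sketch of that workaround, assuming a hypothetical process-global pthread_rwlock (vm_lock) and assuming every mmap/munmap in the process is routed through wrappers: unlockers hold the lock shared, address-space changes take it exclusively.

    #include <pthread.h>
    #include <sys/mman.h>

    struct robust_mutex;                         /* from the sketch above */
    void robust_unlock(struct robust_mutex *m);

    /* Hypothetical global lock: unlocks share it, mappers own it. */
    static pthread_rwlock_t vm_lock = PTHREAD_RWLOCK_INITIALIZER;

    void robust_unlock_guarded(struct robust_mutex *m)
    {
        pthread_rwlock_rdlock(&vm_lock);  /* held across steps 2-4 */
        robust_unlock(m);
        pthread_rwlock_unlock(&vm_lock);
    }

    /* All mappings must go through wrappers like this one, so no new
     * mapping can reuse the mutex's address during the step 3-4 window. */
    int munmap_guarded(void *addr, size_t len)
    {
        pthread_rwlock_wrlock(&vm_lock);
        int r = munmap(addr, len);
        pthread_rwlock_unlock(&vm_lock);
        return r;
    }

The read/write split keeps unrelated unlocks concurrent with one another; only the (rare) mmap/munmap calls have to wait.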

answered by R.. GitHub STOP HELPING ICE


The description of the race by FreeBSD pthread developer David Xu: http://lists.freebsd.org/pipermail/svn-src-user/2010-November/003668.html

I don't think the munmap/mmap cycle is strictly required for the race: the piece of shared memory might simply be reused for a different purpose, which is uncommon but valid.
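
As a hypothetical illustration of that reuse, with no munmap involved: the mutex occupies a slot in a long-lived shared mapping, and the slot is recycled for plain data after the mutex is destroyed.

    #include <pthread.h>

    /* Hypothetical layout: the slot is recycled in place, so the race
     * needs no munmap/mmap cycle at all. */
    union shared_slot {
        pthread_mutex_t mtx;  /* first life: a robust process-shared mutex */
        int             data; /* second life: plain shared data            */
    };

If the data value happens to equal the TID of a thread still inside the unlock window when that thread dies, the kernel flags it as a dead owner just as in the mapped-file scenario.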

As also mentioned in that message, more "fun" occurs if threads with different privileges access a common robust mutex. Because the node for the list of owned robust mutexes lives in the mutex itself, a low-privilege thread can corrupt a high-privilege thread's list. This is easily exploited to crash the high-privilege thread, and in rare cases it might allow the high-privilege thread's memory to be corrupted. Apparently Linux's robust mutexes are only designed for use by threads with the same privileges. This could easily have been avoided by making the robust list an array kept fully in the thread's own memory, instead of a linked list threaded through the mutexes; the kernel structures shown below make the problem concrete.
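
For reference, the kernel's view of the list, as declared in the Linux UAPI header include/uapi/linux/futex.h (comments abridged), shows that the per-mutex node is intrusive, i.e. it lives in the mutex's own, possibly shared, memory:

    /* include/uapi/linux/futex.h (abridged) */
    struct robust_list {
        struct robust_list *next;
    };

    struct robust_list_head {
        struct robust_list  list;            /* list threaded through mutexes */
        long                futex_offset;    /* offset from node to lock word */
        struct robust_list *list_op_pending; /* mutex being (un)locked now    */
    };

Only the head and the list_op_pending pointer live in thread-private memory; every next pointer sits in a (possibly shared) mutex, which is exactly what an array kept fully in the thread's own memory would have avoided.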

answered by jilles