compiler memory barrier and mutex

The POSIX standard says that things like mutexes enforce memory synchronization. However, the compiler may reorder memory accesses. Say we have:

lock(mutex);
setdata(0);
ready = 1;
unlock(mutex);

It might be changed by compiler reordering to the code below, right?

ready = 1;
lock(mutex);
setdata(0);
unlock(mutex);

So how can a mutex synchronize memory accesses? To be more precise, how do compilers know that reordering must not happen across lock/unlock?

Actually, from a single-threaded point of view, reordering the ready assignment is perfectly safe, since ready is not used in the call to lock(mutex).

EDIT: So if a function call is something the compiler will not reorder across, can we regard it as a compiler memory barrier, like

asm volatile("" ::: "memory")
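
For reference, here is roughly how such a compiler-only barrier is used (a minimal sketch; the variable and function names are placeholders, not from the original post). Note that it constrains only the compiler, not the CPU:

/* A compiler-only barrier: the compiler may not move memory accesses
   across it, but no CPU fence instruction is emitted. */
#define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

int data;
int ready;

void publish(void)
{
    data = 42;
    COMPILER_BARRIER();  /* the store to data cannot move below this line */
    ready = 1;           /* and this store cannot move above it */
}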
asked Jan 21 '13 at 11:01 by user1192878


2 Answers

The general answer is that your compiler should support POSIX if you want to use it for POSIX targets, and that support means it should know not to reorder across lock and unlock.

That said, this kind of knowledge is commonly achieved in a trivial way: a compiler will not reorder accesses to (non-provably-local) data across a call to an external function that may use or modify them. It would have to know something special about lock and unlock to be able to reorder across them.

And no, it's not as simple as "a call to a global function is always a compiler barrier" -- we should add "unless the compiler knows something specific about that function". This really does happen: e.g. pthread_self on Linux (NPTL) is declared with the __const__ attribute, allowing gcc to reorder across pthread_self() calls and even to eliminate unnecessary calls altogether.
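
To illustrate the distinction (a minimal sketch; the function and variable names are made up for illustration, only the attribute itself is real GCC syntax):

/* Declared const: the compiler may assume it reads no global memory and
   has no side effects, so a call to it is not a compiler barrier. */
extern int pure_id(void) __attribute__((__const__));

/* An ordinary external function: the compiler must assume it may read or
   modify any non-local data, so it acts as a compiler barrier in practice. */
extern void opaque_call(void);

int shared;

void demo(void)
{
    shared = 1;
    (void)pure_id();   /* stores may be moved across this call */
    opaque_call();     /* stores cannot be moved across this call */
    shared = 2;
}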

We can easily imagine a compiler supporting function attributes for acquire/release semantics, making lock and unlock less than a full compiler barrier.
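
For what it's worth, C11's <stdatomic.h> exposes exactly that kind of acquire/release ordering to the compiler. A minimal hand-rolled spinlock sketch (purely to illustrate the one-way ordering, not a replacement for a pthread mutex):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static int data;

void critical_section(void)
{
    /* Acquire: accesses below may not be moved above this point. */
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ;  /* spin until the lock is ours */

    data = 42;  /* stays inside the critical section */

    /* Release: accesses above may not be moved below this point, though
       accesses after the unlock may still sink into the critical section. */
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}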

answered Oct 04 '22 at 17:10 by Anton Kovalenko


Compilers will not reorder things when it is not clear that doing so is safe. In your "what if" example, you are not proposing a reordered memory access, you're asking what if the compiler totally changes the code ordering -- and it won't. Something the compiler might do is change the order of actual memory reads/writes, but not of function calls, either with respect to each other or with respect to those memory accesses.

An example of where the compiler might reorder memory accesses: let's say you have this code:

a = *pAddressA;
b = *pAddressB;

and let's consider the case where the value of pAddressB is in a register while pAddressA is not. It's fair game for the compiler to read address B first, then move the value of pAddressA into that same register so that the new location can be read. If there happens to be a function call between these accesses, the compiler cannot do this.
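
A small sketch of that situation (some_function and the function names are placeholders added for illustration):

extern void some_function(void);   /* opaque external call */

int reorder_possible(int *pAddressA, int *pAddressB)
{
    /* No call in between: the compiler may load *pAddressB first, then
       reuse the register that held pAddressB to hold pAddressA. */
    int a = *pAddressA;
    int b = *pAddressB;
    return a + b;
}

int reorder_blocked(int *pAddressA, int *pAddressB)
{
    /* The opaque call may read or modify either location, so the loads
       cannot be moved across it. */
    int a = *pAddressA;
    some_function();
    int b = *pAddressB;
    return a + b;
}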

answered Oct 04 '22 at 17:10 by mah