
Understanding C++11 memory fences

Tags:

c++

c++11

atomic

I'm trying to understand memory fences in C++11. I know there are better ways to do this (atomic variables and so on), but I wondered if this usage was correct. I realize that this program doesn't do anything useful; I just wanted to make sure that the fence functions do what I think they do.

Basically: does the release fence ensure that any changes made in this thread before the fence are visible to other threads after the fence, and does the acquire fence in the second thread ensure that any changes to the variables are visible in that thread immediately after the fence?

Is my understanding correct? Or have I missed the point entirely?

#include <iostream>
#include <atomic>
#include <thread>

int a;

void func1()
{
    for(int i = 0; i < 1000000; ++i)
    {
        a = i;
        // Ensure that changes to a to this point are visible to other threads
        atomic_thread_fence(std::memory_order_release);
    }
}

void func2()
{
    for(int i = 0; i < 1000000; ++i)
    {
        // Ensure that this thread's view of a is up to date
        atomic_thread_fence(std::memory_order_acquire);
        std::cout << a;
    }
}

int main()
{
    std::thread t1 (func1);
    std::thread t2 (func2);

    t1.join();
    t2.join();
}
asked Nov 29 '12 by jcoder


2 Answers

Your usage does not actually ensure the things you mention in your comments. That is, your usage of fences does not ensure that your assignments to a are visible to other threads or that the value you read from a is 'up to date.' This is because, although you seem to have the basic idea of where fences should be used, your code does not actually meet the exact requirements for those fences to "synchronize".

Here's a different example that I think demonstrates correct usage better.

#include <iostream>
#include <atomic>
#include <thread>

std::atomic<bool> flag(false);
int a;

void func1()
{
    a = 100;
    atomic_thread_fence(std::memory_order_release);
    flag.store(true, std::memory_order_relaxed);
}

void func2()
{
    while(!flag.load(std::memory_order_relaxed))
        ;

    atomic_thread_fence(std::memory_order_acquire);
    std::cout << a << '\n'; // guaranteed to print 100
}

int main()
{
    std::thread t1 (func1);
    std::thread t2 (func2);

    t1.join();
    t2.join();
}

The load and store on the atomic flag do not synchronize by themselves, because they both use the relaxed memory ordering. Without the fences this code would be a data race, because we're performing conflicting operations on a non-atomic object in different threads, and without the fences and the synchronization they provide there would be no happens-before relationship between the conflicting operations on a.

However with the fences we do get synchronization because we've guaranteed that thread 2 will read the flag written by thread 1 (because we loop until we see that value), and since the atomic write happened after the release fence and the atomic read happens-before the acquire fence, the fences synchronize. (see § 29.8/2 for the specific requirements.)

This synchronization means anything that happens-before the release fence happens-before anything that happens-after the acquire fence. Therefore the non-atomic write to a happens-before the non-atomic read of a.

Things get trickier when you're writing a variable in a loop, because you might establish a happens-before relation for some particular iteration, but not other iterations, causing a data race.

std::atomic<int> f(0);
int a;

void func1()
{
    for (int i = 0; i < 1000000; ++i) {
        a = i;
        atomic_thread_fence(std::memory_order_release);
        f.store(i, std::memory_order_relaxed);
    }
}

void func2()
{
    int prev_value = 0;
    while (prev_value < 999999) {
        while (true) {
            int new_val = f.load(std::memory_order_relaxed);
            if (prev_value < new_val) {
                prev_value = new_val;
                break;
            }
        }

        atomic_thread_fence(std::memory_order_acquire);
        std::cout << a << '\n';
    }
}

This code still causes the fences to synchronize but does not eliminate data races. For example, if f.load() happens to return 10 then we know that a = 1, a = 2, ... a = 10 have all happened-before that particular cout << a, but we don't know that cout << a happens-before a = 11. Those are conflicting operations on different threads with no happens-before relation; a data race.

answered Oct 01 '22 by bames53


Your usage is correct, but insufficient to guarantee anything useful.

For example, the compiler is free to internally implement a = i; like this if it wants to:

while(a != i)
{
    ++a;
    atomic_thread_fence(std::memory_order_release);
}

So the other thread may see any values at all.

Of course, the compiler would never implement a simple assignment like that. However, there are cases where similarly perplexing behavior is actually an optimization, so it's a very bad idea to rely on ordinary code being implemented internally in any particular way. This is why we have things like atomic operations, and why fences only produce guaranteed results when used in combination with such operations.

answered Oct 01 '22 by David Schwartz