Assume there are two threads running Thread1() and Thread2() respectively. Thread 1 just sets a global flag to tell thread 2 to quit, and thread 2 periodically checks whether it should quit.
volatile bool is_terminate = false;

void Thread1() {
    is_terminate = true;
}

void Thread2() {
    while (!is_terminate) {
        // ...
    }
}
I want to ask whether the above code is safe, assuming that access to is_terminate is atomic. I already know that many materials state that volatile cannot ensure thread safety in general. But in a situation where only one atomic variable is shared, do we really need to protect the shared variable using a lock?
Unlike synchronized methods or blocks, volatile does not make other threads wait while one thread is working on a critical section. Therefore, the volatile keyword does not provide thread safety when non-atomic or composite operations are performed on shared variables. Operations like increment and decrement are composite operations: each is a read, a modification, and a write.
In multithreaded environments, we need to write implementations in a thread-safe way: different threads can access the same resources without exposing erroneous behavior or producing unpredictable results. In most cases, errors in multithreaded applications are the result of incorrectly sharing state between several threads, so one approach is to achieve thread safety through stateless implementations that share no state at all.
In summary, declaring a shared variable as volatile will not always be thread-safe. To provide thread safety and avoid race conditions for non-atomic operations, synchronized methods or blocks and atomic variables are both viable solutions.
It is probably sort of thread-safe.
Thread safety tends to depend on context. Updating a bool is always thread safe, if you never read from it. And if you do read from it, then the answer depends on when you read from it, and what that read signifies.
On some CPUs, but not all, writes to an object of type bool will be atomic. x86 CPUs will generally make it atomic, but others might not. If the update isn't atomic, then adding volatile won't help you.
But the next problem is reordering. The compiler (and CPU) will carry out reads/writes to volatile variables in the order specified, without any reordering. So that's good.

But it makes no guarantee about reordering one volatile memory access relative to all the non-volatile ones. A common example: you define some kind of flag to protect access to a resource, you make the flag volatile, and then the compiler moves the resource access up so it happens before you check the flag. It's allowed to do that, because it's not reordering the internal ordering of two volatile accesses, but merely a volatile and a non-volatile one.
Honestly, the question I'd ask is: why not just do it properly? It is possible that volatile will work in this situation, but why not save yourself the trouble and make it clearer that it's correct? Slap a memory barrier around it instead.
It is not thread safe.
If the threads, for example, are run on CPUs with separate caches there are no language rules saying that the caches are to be synchronized when writing a volatile variable. The other thread may not see the change for a very long time, if ever.
To answer in another way: if volatile were enough to be thread safe, why is C++0x adding an entire chapter with atomic operations?
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2047.html