I've been hearing so many conflicting answers that I no longer know what to think. What I understood to be the agreed-upon knowledge is that sharing memory between threads in a thread-safe manner in C++ requires using volatile together with std::mutex.
Based on that understanding, I've been writing code like this:
volatile bool ready = false;
std::condition_variable cv;
std::mutex mtx;
std::unique_lock<std::mutex> lckr{ mtx };
cv.wait(lckr, [&ready]() -> bool { return ready; });
But then I watched a CppCon talk by Chandler Carruth in which he said (as a side note) that volatile is not required in this situation, and that I should basically never use volatile.
I then saw other answers on Stack Overflow saying that volatile should never be used for this, that it isn't good enough, and that it doesn't guarantee atomicity at all.
Is Chandler Carruth correct? Are we both wrong?
So now I have three options. First, I want to know whether the C++14 ISO standard allows me to write code like this:
#include <condition_variable>
#include <mutex>
#include <iostream>
#include <future>
#include <functional>
struct sync_t
{
    std::condition_variable cv;
    std::mutex mtx;
    bool ready{ false };
};

static void threaded_func(sync_t& sync)
{
    std::lock_guard<std::mutex> lckr{ sync.mtx };
    sync.ready = true;
    std::cout << "Waking up main thread" << std::endl;
    sync.cv.notify_one();
}

int main()
{
    sync_t sync;
    {
        std::unique_lock<std::mutex> lckr{ sync.mtx };
        sync.ready = false;

        std::future<void> thread =
            std::async(std::launch::async, threaded_func, std::ref(sync));

        std::cout << "Preparing to sleep" << std::endl;
        sync.cv.wait(lckr, [&sync]() -> bool { return sync.ready; });
        thread.get();
    }
    std::cout << "Done program execution" << std::endl;
    return 0;
}
Second, what happens when I make it:
volatile bool ready{ false };
And third, what happens when I make it:
std::atomic<bool> ready{ false };
The volatile qualifier has no required effect on access to an object from different threads; it only guarantees that, within a single thread, the compiler will not optimize out the side effects of accesses to the object. From cppreference:
- volatile object - an object whose type is volatile-qualified, or a subobject of a volatile object, or a mutable subobject of a const-volatile object. Every access (read or write operation, member function call, etc.) made through a glvalue expression of volatile-qualified type is treated as a visible side-effect for the purposes of optimization (that is, within a single thread of execution, volatile accesses cannot be optimized out or reordered with another visible side effect that is sequenced-before or sequenced-after the volatile access. This makes volatile objects suitable for communication with a signal handler, but not with another thread of execution, see std::memory_order). Any attempt to refer to a volatile object through a glvalue of non-volatile type (e.g. through a reference or pointer to non-volatile type) results in undefined behavior.
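For contrast, here is a minimal sketch of the one use the quote does endorse: a flag shared with a signal handler. The handler name and the polling loop are illustrative, not taken from the question.
#include <csignal>
#include <iostream>

// volatile std::sig_atomic_t is the type the standard allows for a flag set from a signal handler
volatile std::sig_atomic_t got_signal = 0;

void handle_sigint(int)
{
    got_signal = 1; // only async-signal-safe operations are allowed here
}

int main()
{
    std::signal(SIGINT, handle_sigint);
    while (!got_signal)
    {
        // volatile forces the compiler to re-read got_signal every iteration,
        // so this loop cannot be "optimized" into an infinite loop
    }
    std::cout << "Caught SIGINT" << std::endl;
}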
To prevent undefined behaviour when accessing an object from multiple threads, you should use a std::atomic object. Again, from cppreference:
Each instantiation and full specialization of the std::atomic template defines an atomic type. If one thread writes to an atomic object while another thread reads from it, the behavior is well-defined (see memory model for details on data races).
In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order.
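As a minimal sketch (not the code from the question; the names are mine), this is what a stand-alone std::atomic<bool> flag looks like with no mutex at all:
#include <atomic>
#include <iostream>
#include <thread>

int main()
{
    std::atomic<bool> ready{ false };

    std::thread worker([&ready]
    {
        // ... produce some result ...
        ready.store(true, std::memory_order_release); // publish
    });

    while (!ready.load(std::memory_order_acquire)) // consume
    {
        std::this_thread::yield(); // busy-wait; fine for a sketch, wasteful in real code
    }
    std::cout << "Worker finished" << std::endl;

    worker.join();
}
Note that in the code from the question the flag is only ever read or written while sync.mtx is held, so the mutex already orders those accesses; std::atomic is what you reach for when the flag is touched outside any lock, as in the spin-wait above.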
No. volatile is a confusing keyword, but in C++ it has nothing to do with concurrency, unlike in Java or C#, where volatile does carry inter-thread ordering guarantees. Here it only tells the compiler that accesses to the variable must not be optimized away.
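A hypothetical illustration of that point (the counters and counts are mine, not from either answer): volatile does not make a read-modify-write atomic, whereas std::atomic does.
#include <atomic>
#include <iostream>
#include <thread>

volatile int racy_counter = 0;       // concurrent unsynchronized access: a data race, undefined behaviour
std::atomic<int> safe_counter{ 0 };  // well-defined concurrent access

int main()
{
    auto work = []
    {
        for (int i = 0; i < 100000; ++i)
        {
            ++racy_counter;                                        // not atomic, despite volatile
            safe_counter.fetch_add(1, std::memory_order_relaxed);  // atomic
        }
    };
    std::thread t1{ work };
    std::thread t2{ work };
    t1.join();
    t2.join();
    std::cout << "volatile: " << racy_counter        // typically less than 200000
              << ", atomic: " << safe_counter.load() // always exactly 200000
              << std::endl;
}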