Do I need to synchronize std::condition_variable/condition_variable_any::notify_one?
As far as I can see, if loss of notifications is acceptable, it is OK to call notify_one
without protection (by a mutex, for example).
For instance, I saw the following usage pattern (sorry, I don't remember where):
{
    {
        lock_guard<mutex> l(m);
        // do work
    }
    c.notify_one();
}
But, I inspected libstdc++ sources, and I see:
condition_variable::notify_one
void condition_variable::notify_one() noexcept
{
    int __e = __gthread_cond_signal(&_M_cond);

    // XXX not in spec
    // EINVAL
    if (__e)
        __throw_system_error(__e);
}
and condition_variable_any::notify_one:
void condition_variable_any::notify_one() noexcept
{
    lock_guard<mutex> __lock(_M_mutex);
    _M_cond.notify_one();
}
And here is layout of condition_variable_any:
class condition_variable_any
{
condition_variable _M_cond;
mutex _M_mutex;
// data end
I.e. it is just thin wrapper around condition_variable+mutex.
So, questions:
1. Is it thread-safe to not protect notify_one by a mutex, for either condition_variable_any or condition_variable?
2. Why does the implementation of condition_variable_any use an additional mutex?
3. Why do the implementations of condition_variable_any::notify_one and condition_variable::notify_one differ? Maybe condition_variable::notify_one requires manual protection but condition_variable_any::notify_one doesn't? Is it a libstdc++ bug?
I.e. it is just thin wrapper around condition_variable+mutex.
Er, no. Just because it has members of those types doesn't make it a thin wrapper. Try to understand what it actually does, not just the types of its private members. There's some quite subtle code there.
1 Is it thread-safe to not protect notify_one by a mutex, for either condition_variable_any or condition_variable?
Yes.
In fact, calling notify_one() with the mutex locked will cause waiting threads to wake up, attempt to lock the mutex, find it is still locked by the notifying thread, and go back to sleep until the mutex is released.
If you call notify_one() without the mutex locked then the waking threads can run immediately.
2 Why does the implementation of condition_variable_any use an additional mutex?
condition_variable_any can be used with any Lockable type, not just std::mutex, but internally the one in libstdc++ uses a condition_variable, which can only be used with std::mutex, so it has an internal std::mutex object too.
So the condition_variable_any works with two mutexes: the external one supplied by the user and the internal one used by the implementation.
3 Why do the implementations of condition_variable_any::notify_one and condition_variable::notify_one differ? Maybe condition_variable::notify_one requires manual protection but condition_variable_any::notify_one doesn't? Is it a libstdc++ bug?
No, it's not a bug.
The standard requires that calling wait(mx) must atomically unlock mx and sleep. libstdc++ uses the internal mutex to provide that guarantee of atomicity. The internal mutex must be locked to avoid missed notifications if other threads are just about to wait on the condition_variable_any.
(1) I don't see any reason that signalling a condition variable has to be guarded by a mutex, from a data-race stand-point. Obviously, you have the possibility of receiving redundant notifications or losing notifications, but if this is an acceptable or recoverable error condition for your program, I don't believe there's anything in the standard that will make it illegal. The standard, of course, won't guard you against race conditions; it's the programmer's responsibility to make sure that race conditions are benign. (And, of course, it is essential that the programmer not put any "data races", which are defined very specifically in the standard but don't apply directly to synchronization primitives, or undefined behavior is summoned.)
(2) I can't answer a question like this about the internal implementation of a standard library facility. It is, of course, the vendor's responsibility to provide library facilities that work correctly and meet the specification. This library's implementation may have some internal state that requires mutual exclusion to avoid corruption, or it may perform locking in order to avoid lost or redundant notifications. (Just because your program may tolerate them, doesn't mean arbitrary users of the library can, and in general I expect they can't.) It would just be speculation on my part what they're guarding with this mutex.
(3) condition_variable_any is made to work on any lock-like object, while condition_variable is designed specifically to work with unique_lock<mutex>. The latter is probably easier to implement and/or more performant than the former, since it knows specifically which types it is operating on and what they require (whether they're trivial, whether they fit in a cache line, whether they map directly to a specific platform's set of syscalls, what fences or cache coherence guarantees they imply, etc.), while the former provides a generic facility for operating on lock-ish objects without being stuck specifically with the constraints of std::mutex or std::unique_lock<>.