I'm trying to implement some cross-platform code in C++11. Part of this code implements a semaphore object using a std::condition_variable. When I need to do a timed wait on the semaphore, I use wait_until or wait_for.
The problem I'm experiencing is that the standard implementation of condition_variable on POSIX-based systems appears to rely on the system clock rather than the monotonic clock (see also: this issue against the POSIX spec).
That means that if the system clock gets changed to some time in the past, my condition variable will block for far longer than I expect it to. For instance, if I want my condition_variable to time out after 1 second, but someone adjusts the clock back 10 minutes during the wait, the condition_variable blocks for 10 minutes + 1 second. I've confirmed that this is the behavior on an Ubuntu 14.04 LTS system.
I need this timeout to be at least somewhat accurate (i.e., it can be off by some margin of error, but it still needs to fire even if the system clock changes). It seems like what I'm going to need to do is write my own version of condition_variable that uses the POSIX functions and implements the same interface using the monotonic clock.
That sounds like A Lot Of Work - and kind of a mess. Is there some other way of working around this issue?
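For reference, a stripped-down sketch of the kind of semaphore I mean (illustrative only, not my actual code) - the wait_for call near the bottom is where the clock dependency bites:

#include <chrono>
#include <condition_variable>
#include <mutex>

// Minimal counting semaphore; the timed acquire goes through
// std::condition_variable::wait_for.
class Semaphore {
public:
    void post() {
        std::lock_guard<std::mutex> lock(m_mutex);
        ++m_count;
        m_cond.notify_one();
    }

    // Returns false if the timeout expires before the semaphore is signalled.
    bool wait_for(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(m_mutex);
        if (!m_cond.wait_for(lock, timeout, [this] { return m_count > 0; }))
            return false;
        --m_count;
        return true;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    unsigned m_count = 0;
};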
I encountered the same problem. A colleague of mine gave me a tip to use certain C functions from <pthread.h> instead, and it worked out wonderfully for me.
As an example, I had:
std::mutex m_dataAccessMutex;
std::condition_variable m_dataAvailableCondition;
with its standard usage:
std::unique_lock<std::mutex> _(m_dataAccessMutex);
// ...
m_dataAvailableCondition.notify_all();
// ...
m_dataAvailableCondition.wait_for(...);
The above can be replaced by using pthread_mutex_t and pthread_cond_t. The advantage is that you can specify the clock to be monotonic. Brief usage example:
#include <errno.h>   // for ETIMEDOUT
#include <pthread.h>
#include <time.h>    // for clock_gettime and struct timespec

// Declare the necessary variables
pthread_mutex_t m_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_condattr_t m_attr;
pthread_cond_t m_cond;

// Set clock to monotonic
pthread_condattr_init(&m_attr);
pthread_condattr_setclock(&m_attr, CLOCK_MONOTONIC);
pthread_cond_init(&m_cond, &m_attr);

// Wait on data: the absolute timeout is measured against CLOCK_MONOTONIC,
// so setting the system clock back no longer stretches the wait.
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
ts.tv_sec += timeout_in_seconds;

pthread_mutex_lock(&m_mutex);
// In real code, call pthread_cond_timedwait in a loop that re-checks the
// data predicate, to cope with spurious wakeups.
int rc = pthread_cond_timedwait(&m_cond, &m_mutex, &ts);
if (rc != ETIMEDOUT) {
    // do things with data
} else {
    // error: timeout
}
// ...
pthread_mutex_unlock(&m_mutex); // have to do it manually to unlock
// ...

// Notify the data is ready
pthread_cond_broadcast(&m_cond);
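If you go this route, it is also worth wrapping the raw pthread objects in a small class so the attribute, condition variable and mutex get initialised and destroyed in one place. A rough sketch (names are made up, error checking omitted):

#include <errno.h>
#include <pthread.h>
#include <time.h>

// Sketch of a condition variable that times out against CLOCK_MONOTONIC.
class MonotonicCondVar {
public:
    MonotonicCondVar() {
        pthread_mutex_init(&m_mutex, nullptr);
        pthread_condattr_init(&m_attr);
        pthread_condattr_setclock(&m_attr, CLOCK_MONOTONIC);
        pthread_cond_init(&m_cond, &m_attr);
    }
    ~MonotonicCondVar() {
        pthread_cond_destroy(&m_cond);
        pthread_condattr_destroy(&m_attr);
        pthread_mutex_destroy(&m_mutex);
    }

    void lock()       { pthread_mutex_lock(&m_mutex); }
    void unlock()     { pthread_mutex_unlock(&m_mutex); }
    void notify_all() { pthread_cond_broadcast(&m_cond); }

    // Call with the mutex locked; returns false on timeout.
    bool wait_for_seconds(long seconds) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        ts.tv_sec += seconds;
        return pthread_cond_timedwait(&m_cond, &m_mutex, &ts) != ETIMEDOUT;
    }

private:
    pthread_mutex_t m_mutex;
    pthread_condattr_t m_attr;
    pthread_cond_t m_cond;
};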
Register your active condition variables centrally.
Make some effort to detect the clock error, even if it is a thread spin-locking on the current clock (ick) or some other means.
When you detect a clock error, poke the condition variable.
Now wrap your condition variables in a thin wrapper that also supports detecting the clock slippage. It invokes wait_until but replaces the predicate with one that detects clock slippage and, when that happens, breaks out of the wait.
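A rough sketch of the shape of that wrapper (the names and the watchdog hook are made up for illustration; the central registry and the detection thread are left out):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>

// Assumes some external watchdog thread that detects system clock jumps,
// sets clock_slipped on every registered wrapper, and calls notify_all()
// so waiters re-evaluate their predicate and break out of the wait.
class SlipAwareCondVar {
public:
    std::atomic<bool> clock_slipped{false};

    template <class Clock, class Duration, class Predicate>
    bool wait_until(std::unique_lock<std::mutex>& lock,
                    const std::chrono::time_point<Clock, Duration>& deadline,
                    Predicate pred) {
        // Still a system-clock-based wait underneath, but the watchdog's
        // poke lets the waiter escape when the clock slips.
        m_cond.wait_until(lock, deadline,
                          [&] { return clock_slipped.load() || pred(); });
        return pred();  // caller decides what to do if the condition still isn't met
    }

    void notify_all() { m_cond.notify_all(); }

private:
    std::condition_variable m_cond;
};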
When your implementation is broken, you gotta do what you gotta do.
After considering the possible solutions to this problem, the one that seems to make the most sense is to ban the use of std::condition_variable (or at least make the caveat clear that it is always going to use the system clock). Then I have to basically re-implement the standard library's condition_variable myself, in a way that respects the clock choice.
Since I have to support multiple platforms (Bionic, POSIX, Windows, and eventually MacOS), I'm going to have to maintain several versions of this code.
While this is nasty, it seems like the alternatives are even nastier.
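To give an idea of the shape of it, the platform split ends up looking something like this (the header names are hypothetical, just to illustrate the structure):

// One implementation per platform, selected at compile time.
#if defined(_WIN32)
  #include "steady_condition_variable_win32.h"    // hypothetical Windows version
#elif defined(__ANDROID__) || defined(__linux__)
  #include "steady_condition_variable_pthread.h"  // hypothetical: the pthread_condattr_setclock approach above
#elif defined(__APPLE__)
  #include "steady_condition_variable_apple.h"    // hypothetical: Apple's pthreads may need a different mechanism
#else
  #error "No monotonic condition variable implementation for this platform"
#endif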
This may not be the best solution, or a great solution, but you did say "work around" and not "a lot of work", so:
Do a plain (untimed) wait on the std::condition_variable, and spin up a helper thread that sleeps for the timeout and then notifies it (the wakeup and the timeout come from the notification and the relative sleep - respectively). A relative sleep generally isn't thrown off by the wall clock jumping the way an absolute timed wait is. Not very elegant, and there's a bunch of overhead involved - but this does make sense, and isn't some crazy hack.
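A sketch of that shape of workaround (names and the 'ready' flag are just for illustration; one helper thread per timed wait):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Wait for 'ready' with a timeout driven by a relative sleep in a helper
// thread, instead of the condition_variable's own (system-clock-based)
// timed wait.
bool wait_with_workaround(std::mutex& m, std::condition_variable& cv,
                          bool& ready, std::chrono::seconds timeout) {
    bool timed_out = false;

    std::thread timer([&] {
        std::this_thread::sleep_for(timeout);   // relative sleep
        std::lock_guard<std::mutex> lock(m);
        timed_out = true;
        cv.notify_all();                        // wake the waiter on timeout
    });

    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [&] { return ready || timed_out; });
    bool got_data = ready;
    lock.unlock();

    // Part of the overhead: the helper always sleeps for the full timeout,
    // so this join can block even after the data arrived early.
    timer.join();
    return got_data;
}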