Hi, I am looking into thread handover using a fast and reliable producer-consumer queue. I am working on Windows with VC++.
I based my design on Anthony Williams' queue, that is, basically a boost::mutex with a boost::condition_variable. Typically the time between calling notify_one() and the consumer waking up varies between 10 (rare) and 100 microseconds, with most values in the area of 50 microseconds. However, about 1 in 1000 takes over one millisecond, with some taking longer than 5 milliseconds.
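For reference, the design in question is essentially the classic condition-variable queue. A minimal sketch, using the std:: equivalents of the boost primitives and with member names chosen to line up with the consume_all snippet further down, looks roughly like this:

#include <condition_variable>
#include <mutex>
#include <queue>

template<typename T>
class concurrent_queue
{
    std::queue<T> the_queue;
    mutable std::mutex the_mutex;
    std::condition_variable the_condition_variable;
public:
    void push(T item)
    {
        {
            std::lock_guard<std::mutex> lock(the_mutex);
            the_queue.push(std::move(item));
        }
        // wake one waiting consumer; the latency discussed here is measured
        // from this call to the consumer resuming inside wait_and_pop
        the_condition_variable.notify_one();
    }

    void wait_and_pop(T& popped_value)
    {
        std::unique_lock<std::mutex> lock(the_mutex);
        the_condition_variable.wait(lock, [this] { return !the_queue.empty(); });
        popped_value = std::move(the_queue.front());
        the_queue.pop();
    }
};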
I was just wondering whether these are typical values? Is there a faster way of signalling, short of spinning? From here, is it all down to managing thread priorities? I haven't started playing with priorities yet, but is there a chance of getting this into a fairly stable region of about 10 microseconds?
Thanks
EDIT: With SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS), the average wake time is still roughly 50 microseconds, but there are far fewer outliers; most of them are around 150-200 microseconds now. Except for one freak outlier at 7 ms. Hmmm... not good.
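For reference, that priority change boils down to the following Win32 calls. The SetThreadPriority line is an extra knob one could also try on the consumer thread; it is not something measured above:

#include <windows.h>

// Raise the whole process into the real-time priority class
// (this is the call referred to in the edit above).
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);

// Optionally also raise the waiting thread itself (an additional option,
// not part of the measurements quoted above).
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);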
One way to amortise the overhead of locking and thread wakeup is to add a second queue and implement a double-buffering approach. This enables batch processing on the consumer side:
// Assumes class members: std::mutex the_mutex plus two buffers,
// the_queue (filled by the producer) and the_queue2 (drained here),
// e.g. std::vector, so that clear() keeps the allocation.
template<typename F>
std::size_t consume_all(F&& f)
{
    // minimize the scope of the lock: just swap the two buffers
    {
        std::lock_guard<std::mutex> lock(the_mutex);
        std::swap(the_queue, the_queue2);
    }
    // process all items from the_queue2 in one batch, without holding the lock
    for (auto& item : the_queue2)
    {
        f(item);
    }
    auto result = the_queue2.size();
    the_queue2.clear(); // clears the queue and preserves the memory. perfect!
    return result;
}
Working sample code.
This does not fix the latency issue, but it can improve throughput. If a hiccup occurs, the consumer is presented with a large batch, which it can then process at full speed without any locking overhead, quickly catching up with the producer.
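A rough consumption loop on top of consume_all could look like the following. This is a sketch only: Item and process() are placeholders, and it assumes consume_all is a member of the same queue class sketched earlier, alongside the_condition_variable:

// Runs on the consumer thread, as a member of the same queue class.
void consumer_loop()
{
    for (;;)
    {
        // block until the producer has signalled at least one item
        {
            std::unique_lock<std::mutex> lock(the_mutex);
            the_condition_variable.wait(lock, [this] { return !the_queue.empty(); });
        }
        // drain everything that has accumulated since the last wakeup;
        // after a hiccup this will be a large batch, processed without the lock
        consume_all([](Item& item) { process(item); });
    }
}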