Recently I discovered that, for reasons unknown, std::this_thread::sleep_for can sleep about 10 times longer than intended.
It's not just one or a few accidental delays, but a very noticeable effect that can last for minutes and also affects std::condition_variable::wait_for.
Here is my code:
#include <thread>
#include <iostream>
#include <chrono>
using namespace std;
int main(int argc, char const *argv[])
{
    unsigned t = 0;
    auto start = ::std::chrono::steady_clock::now();
    for (unsigned i = 0; i < 100; ++i) {
        ::std::this_thread::sleep_for(chrono::milliseconds(1));
        t ^= i; // prevent overeager optimization
    }
    auto stop = ::std::chrono::steady_clock::now();
    auto elapsed = ::std::chrono::duration_cast<::std::chrono::milliseconds>(stop - start);
    ::std::cout << elapsed.count() << "\n";
    ::std::cerr << t << "\n";
return 0;
}
I compiled this with the Visual Studio command-line tools as follows:
cl /EHsc /std:c++17 /O2 sleep_for.cpp
When I execute it, the result is sometimes close to 150 and sometimes close to 1500. Nothing in the environment changes between executions. What might be the cause of this effect?
UPDATE 1.
It looks like this is an operating-system problem. Besides std::this_thread::sleep_for and std::condition_variable::wait_until, I also tried timers based on the uvw library, with the same result. Whatever I do, I can't make a thread sleep for less than 15 milliseconds. The effect is very noticeable right after system startup, and then it appears less often.
SUMMARY.
Some folks suggest: read the docs! They warned you! std::this_thread::sleep_for may be unstable! On the other hand, nothing in the documentation forbids it from behaving more consistently; one just needs to know how to achieve that.
Jeremy Friesner proposed a solution. I've tested it and it worked, showing 10 times better performance and good average stability. Whether to use std::this_thread::sleep_for, and how, is up to you to decide.
I'd definitely not recommend it if you are developing life-saving equipment, etc. But in many cases it can be quite useful.
A good article that discusses the price of increased stability and resolution: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
This answer is based on Jeremy Friesner's comment. He suggested calling timeBeginPeriod(1) at the beginning of main(). That worked! I created a modified version of the program, and it runs 10 times faster.
Here is the code:
#include <thread>
#include <iostream>
#include <chrono>
#include <windows.h>
using namespace std;
int main(int argc, char const *argv[])
{
    timeBeginPeriod(1); // request 1 ms timer resolution from Windows
    unsigned t = 0;
    auto start = ::std::chrono::steady_clock::now();
    for (unsigned i = 0; i < 100; ++i) {
        ::std::this_thread::sleep_for(chrono::milliseconds(1));
        t ^= i; // prevent overeager optimization
    }
    auto stop = ::std::chrono::steady_clock::now();
    auto elapsed = ::std::chrono::duration_cast<::std::chrono::milliseconds>(stop - start);
    ::std::cout << elapsed.count() << "\n";
    ::std::cerr << t << "\n";
return 0;
}
It can be compiled using the following command:
cl /EHsc /std:c++17 /O2 sleep_for.cpp winmm.lib
Thanks, Jeremy!
P.S. Increasing timer frequency has its price: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
Looking at the documentation:
Blocks the execution of the current thread for at least the specified sleep_duration. This function may block for longer than sleep_duration due to scheduling or resource contention delays.