Let me ask my question by way of this test program:
#include <iostream>
#include <chrono>

using std::chrono::nanoseconds;
using std::chrono::duration_cast;

int main(int argc, char* argv[])
{
    std::cout
        << "Resolution (nano) = "
        << (double) std::chrono::high_resolution_clock::period::num /
           std::chrono::high_resolution_clock::period::den *
           1000 * 1000 * 1000
        << std::endl;

    auto t1 = std::chrono::high_resolution_clock::now();
    std::cout << "How many nanoseconds does std::cout take?" << std::endl;
    auto t2 = std::chrono::high_resolution_clock::now();

    auto diff = t2 - t1;
    nanoseconds ns = duration_cast<nanoseconds>(diff);
    std::cout << "std::cout takes " << ns.count() << " nanoseconds"
              << std::endl;
    return 0;
}
Output on my machine:
Resolution (nano) = 100
How many nanoseconds does std::cout take?
std::cout takes 1000200 nanoseconds
I receive either 1000200, 1000300, 1000400, 1000500, 1000600, or 2000600 as a result (= 1 or 2 milliseconds). Obviously, either the resolution of std::chrono is not 100 nanoseconds or the way I measure the time of std::cout is wrong. (Why do I never receive something between 1 and 2 milliseconds, for example 1500000?)
I need a high-resolution timer in C++. The OS itself provides one, because I'm able to measure things with microsecond precision using the C# Stopwatch class on the same machine. So I just need to use the OS's high-resolution timer correctly!
How do I fix my program to produce the expected results?
I'm going to guess you are using Visual Studio 2012. If not, disregard this answer. Visual Studio 2012 typedefs high_resolution_clock to system_clock. Sadly, this means it has crappy precision (around 1 ms). I wrote a better high-resolution clock which uses QueryPerformanceCounter for use in Visual Studio 2012...
HighResClock.h:
#pragma once
#include <chrono>

struct HighResClock
{
    typedef long long                             rep;
    typedef std::nano                             period;
    typedef std::chrono::duration<rep, period>    duration;
    typedef std::chrono::time_point<HighResClock> time_point;
    static const bool is_steady = true;

    static time_point now();
};
HighResClock.cpp:
#include "HighResClock.h"
#include <windows.h>

namespace
{
    // Query the QPC tick frequency once at startup; it is constant for
    // the lifetime of the process.
    const long long g_Frequency = []() -> long long
    {
        LARGE_INTEGER frequency;
        QueryPerformanceFrequency(&frequency);
        return frequency.QuadPart;
    }();
}

HighResClock::time_point HighResClock::now()
{
    LARGE_INTEGER count;
    QueryPerformanceCounter(&count);
    // Convert raw QPC ticks to nanoseconds.
    return time_point(duration(count.QuadPart * static_cast<rep>(period::den) / g_Frequency));
}
(I left out of the above code an assert and some #ifs that check whether it's being compiled on Visual Studio 2012.)
You can use this clock anywhere and in the same way as standard clocks.
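For example, here is a minimal sketch of timing the same std::cout statement with this clock (assuming HighResClock.h and HighResClock.cpp above are part of the build):

#include <iostream>
#include <chrono>
#include "HighResClock.h"

int main()
{
    auto t1 = HighResClock::now();
    std::cout << "How many nanoseconds does std::cout take?" << std::endl;
    auto t2 = HighResClock::now();

    // HighResClock::duration is already nanosecond-based, so the cast is lossless.
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1);
    std::cout << "std::cout takes " << ns.count() << " nanoseconds" << std::endl;
    return 0;
}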
The resolution of a clock is not necessarily the same as the smallest duration that can be represented by the data type the clock uses. In this case your implementation uses a data type which can represent a duration as small as 100 nanoseconds, but the underlying clock doesn't actually have such a resolution.
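One way to see this for yourself is to spin on now() until the reported time advances; the smallest step you observe is the clock's effective tick, regardless of what period advertises. A small sketch:

#include <iostream>
#include <chrono>

int main()
{
    using clock = std::chrono::high_resolution_clock;

    // Spin until the clock's reported value changes; the difference is the
    // smallest step this clock actually takes, which may be far coarser
    // than what clock::period advertises.
    auto start = clock::now();
    auto next = start;
    while (next == start)
        next = clock::now();

    auto tick = std::chrono::duration_cast<std::chrono::nanoseconds>(next - start);
    std::cout << "Observed tick: " << tick.count() << " ns" << std::endl;
    return 0;
}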
The low resolution of Visual Studio's high_resolution_clock has been an issue for several years. Microsoft's C++ standard library maintainer, Stephan T. Lavavej, has indicated that this has been fixed in Visual Studio 2015 via the use of QueryPerformanceCounter().
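If you want to check what your own toolchain gives you, here is a small sketch that inspects the clock's traits (nothing here is Visual Studio-specific):

#include <iostream>
#include <chrono>
#include <type_traits>

int main()
{
    using hrc = std::chrono::high_resolution_clock;

    // On fixed implementations, high_resolution_clock is typically an alias
    // for steady_clock; on Visual Studio 2012 it was system_clock.
    std::cout << std::boolalpha
              << "same as steady_clock: "
              << std::is_same<hrc, std::chrono::steady_clock>::value << '\n'
              << "same as system_clock: "
              << std::is_same<hrc, std::chrono::system_clock>::value << '\n'
              << "is_steady: " << hrc::is_steady << '\n'
              << "period: " << hrc::period::num << '/' << hrc::period::den
              << " seconds" << std::endl;
    return 0;
}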
Maybe your implementation doesn't provide a higher-resolution timer? It seems you are using Windows (you mention C#), so if you are indeed on Windows, you can use QueryPerformanceFrequency and QueryPerformanceCounter directly.
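A minimal sketch of using those two calls directly, assuming a Windows build (this is what the HighResClock answer above wraps in a std::chrono interface):

#include <iostream>
#include <windows.h>

int main()
{
    LARGE_INTEGER frequency, start, end;
    QueryPerformanceFrequency(&frequency);  // ticks per second

    QueryPerformanceCounter(&start);
    std::cout << "How many nanoseconds does std::cout take?" << std::endl;
    QueryPerformanceCounter(&end);

    // Convert elapsed ticks to nanoseconds. Multiply before dividing to
    // avoid losing precision on short intervals.
    long long elapsedNs =
        (end.QuadPart - start.QuadPart) * 1000000000LL / frequency.QuadPart;
    std::cout << "std::cout takes " << elapsedNs << " nanoseconds" << std::endl;
    return 0;
}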