I would like to obtain the time with millisecond precision using Boost. (The accuracy does not need to be millisecond, just close.)
Referring to Local time with milliseconds, and others, it is indicated that the microsecond clock should be used:
boost::posix_time::microsec_clock::local_time();
In my experience, it is not possible to obtain time to a precision of microseconds (assuming comparable accuracy) with the standard, low-impact system calls (e.g., ::GetTickCount() on Windows). Rather, CPU-intensive calls need to be issued to push the accuracy beyond milliseconds (into microseconds).
As I mentioned, I don't need microsecond precision - just something close to millisecond precision. However, Boost.Date_Time does not provide any "millisec_clock": it provides a second_clock, and the next gradation up is microsec_clock, with no "millisec_clock" in between.
If I use the microsec_clock, as indicated, to obtain MILLIseconds, will I be hit with a CPU-intensive call?
I wrote a helper object to measure the time spent inside a function, using boost::date_time objects; specifically, I used microsec_clock::local_time().
I was using this object to measure a couple of million quick calls to different functions (a stress-test case) and suddenly noticed that I could not account for a LOT of my process's execution time. After some experiments I removed most of these counters, and the total execution time of my code went from ~23 minutes to about ~12 minutes (about 50%!).
So, to answer your question from my experience: microsec_clock::local_time() IS EXPENSIVE.
After seeing this, I ran a test using microsec_clock::universal_time() instead of microsec_clock::local_time(), and it was definitely an improvement in my run time. It still added about 3 minutes, but that is better than 10 minutes :P. Thinking about it, I guess the problem is that local_time() offsets the time value to account for the time zone, which in my case was not needed (as I only needed differences in time). I still have to run a test to check whether other methods are faster (such as clock_gettime).
I hope this is the type of answer you were looking for.
According to the relevant documentation:
On most Win32 platforms it is implemented using ftime. Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution.
ftime doesn't appear to be an overly heavy function (here is a question about how it works), but I guess it depends on your idea of CPU-intensive.