Is gettimeofday() guaranteed to be of microsecond resolution?

I am porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux).

I have implemented QueryPerformanceCounter by giving the uSeconds since the process start up:

BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    gettimeofday(&currentTimeVal, NULL);
    performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);
    performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);

    return true;
}

This, coupled with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, giving me a 64-bit variable that contains uSeconds since the program's start-up.
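For completeness, here's the companion QueryPerformanceFrequency() shim I'm pairing with it. This is just a sketch; BOOL and LARGE_INTEGER come from the port's Win32 compatibility typedefs:

/* Report a fixed 1 MHz frequency to match the microsecond counter above. */
BOOL QueryPerformanceFrequency(LARGE_INTEGER* frequency)
{
    frequency->QuadPart = 1000 * 1000;  /* the counter ticks once per microsecond */
    return true;
}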

So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to something other than Linux, however.

asked Aug 01 '08 by Bernard


People also ask

How precise is gettimeofday?

The resolution of the values returned by gettimeofday() is exactly one microsecond, since struct timeval stores whole seconds plus microseconds; the precision actually achieved depends on the kernel's clock source.

What does clock_ gettime do?

The clock_gettime() function gets the current time of the clock specified by clock_id and puts it into the buffer pointed to by tp. On Linux, supported clock IDs include CLOCK_REALTIME and CLOCK_MONOTONIC. The tp parameter points to a struct timespec containing at least the members time_t tv_sec and long tv_nsec.

What is CLOCK_MONOTONIC in C?

CLOCK_MONOTONIC represents monotonic time since some unspecified starting point; this clock cannot be set. CLOCK_MONOTONIC_RAW (Linux-specific) is similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments.


2 Answers

Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 µs. Time can consequently jump forward and backward, depending on the processes running on your system. This effectively makes the answer to your question no.

You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues caused by things like multi-core systems and externally set clocks.

Also, look into the clock_getres() function.
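To make this concrete, here is a minimal sketch of timing an interval with clock_gettime(CLOCK_MONOTONIC) and querying the clock's resolution with clock_getres(). The workload being timed is just a placeholder, and on older glibc you may need to link with -lrt.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, start, end;

    /* Ask the kernel how fine-grained the monotonic clock is. */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the work being timed goes here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Compute the elapsed time in nanoseconds. */
    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld ns\n", elapsed_ns);
    return 0;
}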

answered Sep 21 '22 by Louis Brandy


High Resolution, Low Overhead Timing for Intel Processors

If you're on Intel hardware, here's how to read the CPU's time-stamp counter (the RDTSC instruction). It tells you the number of CPU cycles executed since the processor was booted. This is probably the finest-grained counter you can get for performance measurement.

Note that this is a count of CPU cycles. On Linux you can get the CPU speed from /proc/cpuinfo and divide by it to get seconds. Converting the result to a double is quite handy.
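As a rough sketch of that conversion (the helper name is mine, and it assumes the "cpu MHz" line in /proc/cpuinfo approximates the TSC rate, which is only roughly true on CPUs with frequency scaling):

#include <stdio.h>

/* Read the first "cpu MHz" line from /proc/cpuinfo and return the
   clock rate in Hz, or 0.0 on failure. */
double cpu_hz_from_proc(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) return 0.0;
    char line[256];
    double mhz = 0.0;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "cpu MHz : %lf", &mhz) == 1)
            break;
    }
    fclose(f);
    return mhz * 1e6;
}

Dividing a cycle delta from rdtsc() by this value gives elapsed seconds as a double.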

When I run this on my box, I get

11867927879484732
11867927879692217
it took this long to call printf: 207485

Here's the Intel developer's guide that gives tons of detail.

#include <stdio.h>
#include <stdint.h>

/* Serialize with cpuid, then read the time-stamp counter. */
static inline uint64_t rdtsc()
{
    uint32_t lo, hi;
    __asm__ __volatile__ (
      "xorl %%eax, %%eax\n"
      "cpuid\n"
      "rdtsc\n"
      : "=a" (lo), "=d" (hi)
      :
      : "%ebx", "%ecx");
    return (uint64_t)hi << 32 | lo;
}

int main()
{
    unsigned long long x;
    unsigned long long y;
    x = rdtsc();
    printf("%llu\n", x);
    y = rdtsc();
    printf("%llu\n", y);
    printf("it took this long to call printf: %llu\n", y - x);
    return 0;
}
answered Sep 24 '22 by Mark Harrison