 

Computing time on Linux: granularity and precision

Tags:

c

linux

time

**********************Original question**********************


I am using different kinds of clocks to get the time on Linux systems:

rdtsc, gettimeofday, clock_gettime

and already read various questions like these:

  • What's the best timing resolution can i get on Linux

  • How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?

  • How do I measure a time interval in C?

  • faster equivalent of gettimeofday

  • Granularity in time function

  • Why is clock_gettime so erratic?

But I am a little confused:


What is the difference between granularity, resolution, precision, and accuracy?


Granularity (or resolution or precision) and accuracy are not the same thing (if I am right ...)

For example, while using clock_gettime, the precision is 10 ms, as I get with:

struct timespec res;
clock_getres(CLOCK_REALTIME, &res);

and the granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:

 long ticks_per_sec = sysconf(_SC_CLK_TCK);
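
To see both numbers side by side, here is a minimal runnable sketch (my own, not from the linked pages; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec res;

    /* Smallest representable step of CLOCK_REALTIME */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld.%09ld s\n",
               (long)res.tv_sec, res.tv_nsec);

    /* Clock ticks per second as seen by userspace (USER_HZ, typically 100) */
    long ticks_per_sec = sysconf(_SC_CLK_TCK);
    printf("_SC_CLK_TCK: %ld ticks/s\n", ticks_per_sec);
    return 0;
}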

Accuracy is in nanoseconds, as the code below suggests:

struct timespec gettime_now;

clock_gettime(CLOCK_REALTIME, &gettime_now);
/* start_time holds an earlier tv_nsec sample; tv_sec is ignored here,
   so this difference wraps whenever a second boundary is crossed */
long time_difference = gettime_now.tv_nsec - start_time;

In the link below, I saw that this is the global Linux definition of granularity and that it's better not to change it:

http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw

So my question is whether the remarks above are right, and also:

a) Can we see what the granularity of rdtsc and gettimeofday is (with a command)?

b) Can we change them (in any way)?
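
For (a), I don't know of a single command that reports the granularity of rdtsc; one rough approach (a sketch of my own, assuming x86/x86-64 with GCC or Clang intrinsics) is to estimate the TSC rate by bracketing a known sleep, keeping in mind that nanosleep can overshoot:

#include <stdio.h>
#include <time.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    struct timespec delay = { 0, 100 * 1000 * 1000 };  /* ~100 ms */

    unsigned long long t0 = __rdtsc();
    nanosleep(&delay, NULL);
    unsigned long long t1 = __rdtsc();

    /* cycles elapsed / seconds elapsed ~= TSC frequency */
    printf("estimated TSC rate: %.0f MHz\n", (double)(t1 - t0) / 0.1 / 1e6);
    return 0;
}

For gettimeofday, the clock_getres(CLOCK_REALTIME, ...) call above is the closest programmatic answer I know of, since both read the same underlying clock.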


**********************Edit number 2**********************

I have tested some new clocks and would like to share what I found:

a) On the page below, David Terei wrote a fine program that compares various clocks and their performance:

https://github.com/dterei/Scraps/tree/master/c/time

b) I have also tested omp_get_wtime, as Raxman suggested, and found nanosecond precision, but not really better than clock_gettime (as they found on this website):

http://msdn.microsoft.com/en-us/library/t3282fe5.aspx

The documentation link is from MSDN, but omp_get_wtime itself is part of the portable OpenMP standard, not a Windows-only function.
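
A small sketch of the kind of test I ran (omp_get_wtick() reports the timer's resolution; compile with -fopenmp):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Resolution of the OpenMP wall-clock timer, in seconds */
    printf("omp_get_wtick(): %g s\n", omp_get_wtick());

    double t0 = omp_get_wtime();
    volatile double x = 0.0;
    for (int i = 0; i < 1000000; ++i)
        x += i * 0.5;                 /* some work to time */
    double t1 = omp_get_wtime();

    printf("loop took %g s\n", t1 - t0);
    return 0;
}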

Better results are given by clock_gettime with CLOCK_MONOTONIC than with CLOCK_REALTIME. That's expected: CLOCK_MONOTONIC measures elapsed time from an arbitrary fixed starting point and is not affected by system clock adjustments, while CLOCK_REALTIME tracks wall-clock time, which can be stepped by NTP or an administrator (a minimal interval sketch follows).
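
A minimal interval-measurement sketch with CLOCK_MONOTONIC, folding tv_sec and tv_nsec into one count so the difference never wraps (this fixes the tv_nsec-only subtraction shown earlier):

#include <stdio.h>
#include <time.h>

/* Fold a timespec into a single nanosecond count */
static long long to_ns(const struct timespec *ts)
{
    return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

int main(void)
{
    struct timespec start, stop;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... code being measured ... */
    clock_gettime(CLOCK_MONOTONIC, &stop);

    printf("elapsed: %lld ns\n", to_ns(&stop) - to_ns(&start));
    return 0;
}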

c) I have also found the Intel function ippGetCpuClocks, but I've not tested it because registration is mandatory first:

http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/

... or you may use a trial version

asked May 24 '13 by user2307229


1 Answer

  • Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)

  • Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)

  • Granularity or resolution are about the smallest time interval that the timer can measure. For example, if you have 1 ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.

On Linux, the available timers with increasing granularity are (compared in the sketch after this list):

  • clock() from <time.h> (20 ms or 10 ms resolution?)

  • gettimeofday() from Posix <sys/time.h> (microseconds)

  • clock_gettime() on Posix (nanoseconds?)
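
To see the difference in practice, a small sketch timing the same busy loop with all three; note that clock() reports CPU time consumed, while the other two report wall time:

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
    struct timeval  tv0, tv1;
    struct timespec ts0, ts1;

    clock_t c0 = clock();                   /* CPU time, coarse */
    gettimeofday(&tv0, NULL);               /* wall time, microseconds */
    clock_gettime(CLOCK_MONOTONIC, &ts0);   /* wall time, nanoseconds */

    volatile double x = 0.0;
    for (long i = 0; i < 10000000L; ++i)
        x += i;                             /* busy work to measure */

    clock_t c1 = clock();
    gettimeofday(&tv1, NULL);
    clock_gettime(CLOCK_MONOTONIC, &ts1);

    printf("clock():         %.6f s (CPU time)\n",
           (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("gettimeofday():  %.6f s\n",
           (tv1.tv_sec - tv0.tv_sec) + (tv1.tv_usec - tv0.tv_usec) / 1e6);
    printf("clock_gettime(): %.9f s\n",
           (ts1.tv_sec - ts0.tv_sec) + (ts1.tv_nsec - ts0.tv_nsec) / 1e9);
    return 0;
}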

In C++, the <chrono> header offers a certain amount of abstraction around this, and std::high_resolution_clock attempts to give you the best possible clock.

answered Sep 25 '22 by Kerrek SB