The following snippet of code:
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    for (int x = 0; x < 100000000; x++) {
        timespec_get(&ts, TIME_UTC);
        long cTime = (long) time(NULL);
        /* Print whenever the two seconds values disagree early in a second. */
        if (cTime != ts.tv_sec && ts.tv_nsec < 3000000) {
            printf("cTime: %ld\n", cTime);
            printf("ts.tv_sec: %ld\n", ts.tv_sec);
            printf("ts.tv_nsec: %ld\n", ts.tv_nsec);
        }
    }
    return 0;
}
produces this output:
...
cTime: 1579268059
ts.tv_sec: 1579268060
ts.tv_nsec: 2527419
cTime: 1579268059
ts.tv_sec: 1579268060
ts.tv_nsec: 2534036
cTime: 1579268059
ts.tv_sec: 1579268060
ts.tv_nsec: 2540359
cTime: 1579268059
ts.tv_sec: 1579268060
ts.tv_nsec: 2547039
...
Why the discrepancy between cTime and ts.tv_sec? Note that the problem does not occur if the conditional is changed to ts.tv_nsec >= 3000000; it only manifests when the nanoseconds value is smaller than 3000000.
The reason is that you are (implicitly) using two different system clocks: timespec_get() uses the high-resolution system-wide realtime clock, while time() uses the coarse realtime clock.
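You can observe the granularity difference directly. Here is a minimal sketch (assuming Linux/glibc, where CLOCK_REALTIME_COARSE is available) that queries both clocks with clock_getres():
#define _GNU_SOURCE   /* CLOCK_REALTIME_COARSE is Linux-specific */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res;

    /* High-resolution realtime clock: typically 1 ns resolution. */
    clock_getres(CLOCK_REALTIME, &res);
    printf("CLOCK_REALTIME:        %ld ns\n", res.tv_nsec);

    /* Coarse realtime clock: typically one timer tick, i.e. a few
       milliseconds depending on the kernel's CONFIG_HZ setting. */
    clock_getres(CLOCK_REALTIME_COARSE, &res);
    printf("CLOCK_REALTIME_COARSE: %ld ns\n", res.tv_nsec);

    return 0;
}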
Try using
clock_gettime(CLOCK_REALTIME_COARSE, &ts);
instead of your timespec_get(); the difference should then vanish.
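A self-contained version of the loop with that one-line change might look like this (again assuming Linux, where CLOCK_REALTIME_COARSE is available):
#define _GNU_SOURCE   /* CLOCK_REALTIME_COARSE is Linux-specific */
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    for (int x = 0; x < 100000000; x++) {
        /* Read the coarse realtime clock -- the same granularity
           that time() reports -- instead of timespec_get(). */
        clock_gettime(CLOCK_REALTIME_COARSE, &ts);
        long cTime = (long) time(NULL);
        if (cTime != ts.tv_sec && ts.tv_nsec < 3000000) {
            /* With both reads on the coarse clock, this branch
               should no longer fire. */
            printf("cTime: %ld, ts.tv_sec: %ld\n", cTime, (long) ts.tv_sec);
        }
    }
    return 0;
}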
Edit:
This can be seen in the Linux kernel source, vclock_gettime.c.
The issue is a bit subtle to see there: the seconds part of the structure members used by CLOCK_REALTIME_COARSE and CLOCK_REALTIME contains identical values, but the nanoseconds part differs; with CLOCK_REALTIME it can be larger than 1000000000 (one second). In that case, it is fixed up on the call:
ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
ts->tv_nsec = ns;
This correction is performed neither with CLOCK_REALTIME_COARSE nor with time(), which explains the difference between CLOCK_REALTIME and time().
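For illustration, here is a hedged userspace sketch of that same carry. normalize() and its inputs are mine, not kernel code, and the raw values are reverse-engineered from the output shown in the question:
#include <stdio.h>

#define NSEC_PER_SEC 1000000000L

/* Illustrative stand-in for the kernel's fix-up: carry any full
   seconds contained in the nanoseconds field into the seconds
   field, as __iter_div_u64_rem() effectively does. */
static void normalize(long *sec, long *ns) {
    *sec += *ns / NSEC_PER_SEC;
    *ns = *ns % NSEC_PER_SEC;
}

int main(void) {
    /* Hypothetical raw reading behind the first output block:
       coarse seconds 1579268059 plus 1002527419 ns. */
    long sec = 1579268059L;
    long ns = 1002527419L;

    normalize(&sec, &ns);
    /* Prints sec: 1579268060, ns: 2527419 -- the tv_sec/tv_nsec
       pair the question observed, one second ahead of time(). */
    printf("sec: %ld, ns: %ld\n", sec, ns);
    return 0;
}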