 

How does the clock work in Windows 7?

I have read this answer somewhere, but I don't understand it exactly:

I understand that Windows increments the clock every curTimeIncrement (156001 × 100 nanoseconds) by the value of curTimeAdjustment (156001 ± N). But when the clock is read using GetSystemTime, does the routine interpolate within the 156001 × 100 ns interval to produce the precision indicated?

Can someone try to explain it to me?

What are curTimeIncrement and curTimeAdjustment, and how can Windows do this?

What effect does this have on getting the accurate time?

Is that true just for Windows 7, or also for other operating systems (Windows 8, Linux, etcetera)?

asked Mar 20 '15 by Omega


1 Answer

It refers to the values returned by GetSystemTimeAdjustment() on Windows, which tell you how the clock is being adjusted to catch up or slow down so it matches the real time. "Real" being the time kept by an institution like NIST in the USA; they have atomic clocks whose accuracy is far, far higher than the clock built into your machine.
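
A minimal sketch of reading those values yourself with the documented Win32 GetSystemTimeAdjustment() function (both outputs are in units of 100 nanoseconds; error handling kept short):

```c
// Query the clock adjustment parameters the answer refers to.
// lpTimeIncrement is the nominal amount added per clock tick;
// lpTimeAdjustment is what is actually added while an adjustment
// is in progress. Both are in 100-nanosecond units.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD adjustment, increment;
    BOOL disabled;

    if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
        printf("curTimeIncrement:  %lu x 100 ns per tick\n", increment);
        printf("curTimeAdjustment: %lu x 100 ns added per tick\n", adjustment);
        printf("adjustment disabled: %s\n", disabled ? "yes" : "no");
    } else {
        fprintf(stderr, "GetSystemTimeAdjustment failed: %lu\n", GetLastError());
    }
    return 0;
}
```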

The Real Time Clock (RTC) in your machine has limited accuracy, a side-effect of keeping the hardware affordable; it tends to be off by a few seconds each month. So periodically the operating system contacts a time server through the Internet, time.windows.com being the common selection on Windows, which tells it the current real time according to the atomic clock.

The inaccuracy of the RTC is not the only source of drift; sometimes the real time is changed intentionally, by adding a leap second to resynchronize clocks with the true rotation of the Earth. The current day (24 × 60 × 60 seconds) is a bit too short: the Earth's rotation is slowing down by ~1.5 msec every century and is in general irregular due to large storms and earthquakes. The inserted leap second makes up for that. The most recent one was added on June 30th of this year (2015) at 23:59:60 UTC. 60 is not a typo :)

The one previous to that was on June 30th, 2012. A bit notorious: the insertion of that leap second crashed a lot of Linux servers. Google "Linux leap second bug" to learn more about it.

This is in general what the machinery underneath GetSystemTimeAdjustment() is trying to avoid; instantly changing the time with the value obtained from the time server is very dangerous. Software often has a hard assumption that time progresses steadily and misbehaves when it doesn't. Like observing the same time twice when the clock is set back. Or observing a fake time like 23:59:60 UTC due to the leap second insertion.

So it doesn't. The clock is updated 64 times per second, at the clock tick interrupt. In other words, 1/64 = 0.015625 seconds between ticks, which is 156250 in 100-nanosecond units. If a clock adjustment needs to be made, then it doesn't just add 156250 but slightly more or less, thus slowly resynchronizing the clock to the true time and avoiding upsetting software.
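
To get a feel for the numbers, here is a small back-of-the-envelope sketch; the +100 delta is a hypothetical value chosen for illustration, not anything Windows is documented to use:

```c
// Hypothetical illustration of a gradual clock slew: if every tick adds
// (156250 + delta) instead of 156250 units of 100 ns, how long does it
// take to make up a 1-second error?
#include <stdio.h>

int main(void)
{
    const double ticksPerSecond = 64.0;
    const double delta = 100.0;        /* hypothetical: +100 x 100 ns per tick */
    const double errorSeconds = 1.0;   /* the clock is 1 second behind */

    /* seconds gained per real second of slewing */
    double slewRate = delta * 100e-9 * ticksPerSecond;
    printf("slew rate: %.6f s/s, a 1-second error takes %.0f s (~%.0f min)\n",
           slewRate, errorSeconds / slewRate, errorSeconds / slewRate / 60.0);
    return 0;
}
```

With these numbers the clock gains 640 microseconds per second, so catching up one full second takes about 26 minutes, gentle enough that software never sees time jump.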

This of course has an unpleasant side-effect on software that paints a clock. An obvious way to do it is to use a one-second timer. But sometimes that is not a second; it won't be when a time adjustment is in progress. Then Nyquist's sampling theorem comes into play: sometimes a timer tick does not update the clock at all, or it skips a second. Notably, this is not the only reason why it is hard to keep a painted clock accurate; the timer notification itself is always delayed as well, a side-effect of software not being able to execute instantly. This is in fact the much more likely source of trouble; the clock adjustment is just icing on the cake that's easier to understand.

Awkward problem; Mr. Nyquist has taught us that you have to sample more frequently to eliminate undesirable aliasing effects. So a workaround is to just set the timer to a small interval, like 15 or 31 milliseconds, short enough for the user to no longer observe the missing update. A sketch of that workaround is shown below.
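
A minimal console sketch of the idea: poll the clock every ~15 milliseconds and only "repaint" when the displayed second actually changes (a real GUI clock would do the same from a short-interval WM_TIMER handler):

```c
// Sample the clock far more often than once per second, so a stretched
// or skipped second during a time adjustment is caught within ~15 ms.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEMTIME st;
    WORD lastSecond = 0xFFFF;         /* impossible value forces first paint */

    for (;;) {
        GetSystemTime(&st);
        if (st.wSecond != lastSecond) {   /* repaint only on a change */
            lastSecond = st.wSecond;
            printf("\r%02u:%02u:%02u", st.wHour, st.wMinute, st.wSecond);
            fflush(stdout);
        }
        Sleep(15);                    /* the short sampling interval */
    }
    return 0;
}
```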

answered by Hans Passant