
Why GetTickCount and timeGetTime have different resolution?

By default, GetTickCount and timeGetTime have the same resolution -- 15.625 ms -- but after I call timeBeginPeriod(1), GetTickCount still updates every 15.625 ms, while timeGetTime now updates every 1 ms. Why is this?
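For illustration, here is a minimal sketch (not the poster's actual code) that polls both clocks and counts how many distinct values each reports over a fixed window, before and after timeBeginPeriod(1). It assumes a desktop Windows build linked against winmm.lib.

```cpp
// Compare how often GetTickCount() and timeGetTime() advance, before and
// after timeBeginPeriod(1). Link with winmm.lib for the timeGetTime/timeBeginPeriod calls.
#include <windows.h>
#include <mmsystem.h>
#include <cstdio>
#pragma comment(lib, "winmm.lib")

// Count how many distinct values a clock reports while polling it for ~200 ms.
template <typename ClockFn>
int CountSteps(ClockFn clock)
{
    int steps = 0;
    DWORD start = clock();
    DWORD last = start;
    while (clock() - start < 200) {          // poll for roughly 200 ms
        DWORD now = clock();
        if (now != last) { ++steps; last = now; }
    }
    return steps;
}

int main()
{
    printf("default period:  GetTickCount steps = %d, timeGetTime steps = %d\n",
           CountSteps([] { return GetTickCount(); }),
           CountSteps([] { return timeGetTime(); }));

    timeBeginPeriod(1);                      // request a 1 ms multimedia timer period
    printf("after timeBeginPeriod(1): GetTickCount steps = %d, timeGetTime steps = %d\n",
           CountSteps([] { return GetTickCount(); }),
           CountSteps([] { return timeGetTime(); }));
    timeEndPeriod(1);                        // always pair timeBeginPeriod with timeEndPeriod
    return 0;
}
```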

In Bug in waitable timers?, the author mentioned that:

RTC based timer

What I am wondering is: if GetTickCount and timeGetTime come from the same RTC, why do they have two different resolutions?

thanks!

ajaxhe asked Mar 13 '13 11:03


2 Answers

I think the OP is getting confused between timers, interrupts, and timer ticks.

The quantum interval is the timer tick period. This is hardwired into the system at 18.2 ticks/sec. This never varies for any reason, and is not based on the system CPU clock (obviously!).

You can ask the system for 2 different things: the date and time (GetTime), or the amount of time the system has been running (GetTickCount/GetTickCount64).

If you're interested in the uptime of the system, use GetTickCount. From my limited understanding, GetInterruptTime only returns the amount of time spent during real-time interrupts (as opposed to time spent running applications or idle).
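As a small illustration of those two different questions (assuming "GetTime" above refers to GetSystemTime; GetTickCount64 is the wrap-free uptime variant):

```cpp
// Wall-clock date/time versus time the system has been running.
#include <windows.h>
#include <cstdio>

int main()
{
    SYSTEMTIME st;
    GetSystemTime(&st);                        // current date and time (UTC)
    printf("UTC now: %04u-%02u-%02u %02u:%02u:%02u\n",
           st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);

    ULONGLONG uptimeMs = GetTickCount64();     // milliseconds since boot, 64-bit so it never wraps in practice
    printf("Uptime: %llu ms (~%llu hours)\n", uptimeMs, uptimeMs / 3600000ULL);
    return 0;
}
```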

I'm not sure that telling a new programmer to stop asking "why?" is going to help them. Yes, the OP hasn't seen or read the comments on the page mentioned; but asking here shouldn't be a privilege granted only to searchers who have exhausted all other avenues (possibly including the Seeing Stones of C). We ALL learn here. Stop telling people their question is pointless without telling them why. And there is no reason not to ask. Timers can be confusing!

Cephas Atheos answered Sep 20 '22 13:09


Actually the table you quote is wrong for QueryPerformanceCounter. QPC (for short) is implemented in terms of 3 possible timing sources: 1) the HPET, 2) the ACPI PM timer, 3) RDTSC. The decision is made by heuristics depending on conditions, kernel boot options, bugs in the BIOS, and bugs in the ACPI code provided by the chipset. All of these bugs are discovered on a per-piece-of-hardware basis in Microsoft's labs. Linux and BSD programmers have to find them the hard way themselves and usually must rewrite the ACPI code to work around them. The Linux community has come to hate RDTSC as much as ACPI, for different reasons. But anyway...
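Whichever of the three sources the kernel picks, user code reads QPC the same way; QueryPerformanceFrequency reports the tick rate of the chosen source, so the conversion to seconds is source-independent. A minimal sketch (my own, not part of the table being discussed):

```cpp
// Measure an interval with QPC; the hardware source (HPET, ACPI PM timer, or TSC)
// is hidden behind QueryPerformanceFrequency/QueryPerformanceCounter.
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // ticks per second of whichever source was chosen
    QueryPerformanceCounter(&t0);

    Sleep(100);                         // something to measure

    QueryPerformanceCounter(&t1);
    double elapsedMs = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    printf("QPC frequency: %lld Hz, measured sleep: %.3f ms\n",
           freq.QuadPart, elapsedMs);
    return 0;
}
```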

timeGetTime is different from GetTickCount because GetTickCount's behavior is fixed by its documentation and, for stability, could not be changed. However, Windows needed a better tick resolution in some cases to allow better timer functions (a timer works by sending messages to the application, which retrieves them with GetMessage or PeekMessage and then branches into the right callback to handle the timer). This is needed for multimedia work such as sound/audio synchronization. See the sketch below.
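A rough sketch of that message-based timer path: SetTimer posts WM_TIMER messages, the GetMessage loop retrieves them, and DispatchMessage branches into the callback (the name OnTimer is just an illustration). The requested interval is only honoured to the extent the current tick resolution allows, which is why multimedia code reaches for timeBeginPeriod.

```cpp
#include <windows.h>
#include <cstdio>

// TIMERPROC callback, invoked by DispatchMessage for each WM_TIMER.
// The last parameter is the system tick count when the message was posted.
void CALLBACK OnTimer(HWND, UINT, UINT_PTR, DWORD tickTime)
{
    static int count = 0;
    printf("WM_TIMER #%d at tick %lu\n", ++count, tickTime);
    if (count >= 10) PostQuitMessage(0);     // stop after 10 ticks
}

int main()
{
    // A thread timer (no window); WM_TIMER goes to this thread's message queue.
    SetTimer(nullptr, 0, 15, OnTimer);       // ask for ~15 ms; actual rate follows the tick period

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);               // branches into OnTimer for WM_TIMER
    }
    return 0;
}
```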

Obviously, game or real-time programming sometimes needs even better precision and cannot use timers. Instead it uses busy waiting, or it sleeps on only one occasion: the VSync, through a call to OpenGL or DirectX upon backbuffer/frontbuffer swapping. The video driver will wake up the waiting thread upon the VSync signal from the screen itself. That is an event-based sleep, like a timer but not based on a timer interrupt.

It should be noted that modern kernels have dynamic ticking (the tickless kernel, since Windows 8 or Linux 2.6.18). The finest tick-interrupt frequency cannot be brought under 1 ms to avoid choking the system, but there is no upper limit on the interval: if no application is running and posting timing events, the machine may sleep indefinitely, allowing the CPU to go down to its deepest sleep state (Intel Enhanced SpeedStep C7 state). After that, the next wake-up event most of the time happens because of a device interrupt, mostly USB (a mouse move or other input).

v.oddou answered Sep 24 '22 13:09