How do timers and delays work on low level

I can't really find anything useful on this, but I've been wondering for quite some time now how timers and delays in any programming language work at a low level.

As far as I understand, a CPU continuously executes instructions in all of its cores, as fast as its clock speed allows, for as long as there are instructions to execute (i.e., there is a running, active thread).

I don't see a straightforward way to tie this flow to real time, which makes me wonder how things like animations work, since you encounter them in many, many situations:

  • In the Windows 7 OS, the start menu button gradually glows brighter when you move the mouse over it;
  • In Flash, there is a timeline, and all objects in the Flash document are animated according to the FPS setting and the timeline;
  • jQuery supports various animations;
  • Delays in code execution...

Do computers (motherboards) have physical timers, the way a CPU has registers to do its operations and hold data between calculations? I haven't found anything about this on the internet. Or does the OS contain some really complex programming that provides the lowest-level API for everything timing-related?

I'm really curious about the answer.

asked Nov 02 '12 17:11 by MarioDS

1 Answer

Most (maybe all) CPUs are driven by a clock on the motherboard that "ticks" (generates a signal) at a fixed rate. This is what the megahertz (MHz) or gigahertz (GHz) rating on the processor tells you: the speed this clock runs at. It is also what "overclocking" refers to, when you read that a processor can safely be run at some higher-than-rated GHz setting. Most of what you describe above is ultimately triggered by the ticks generated by this clock. It governs how often the CPU attempts to execute the next instruction, and indeed how often it does everything.

Do not confuse this clock with the Real-Time Clock, which keeps track of what time it is. All references to "system time" or "server time" use the real-time clock, which is a separate piece of hardware on your motherboard that keeps track of the time, even when the computer is turned off.

These two "clocks" are independent of one another and are used for two completely different purposes. One drives all CPU processing: if a given operation (say, multiplying two integers together) takes 127 CPU cycles, then how much real time it takes depends entirely on the gigahertz rating of the CPU clock. If it is set to, say, 3.0 GHz, the CPU can execute 3 billion cycles per second, so something that takes 127 cycles will take 127/3,000,000,000 of a second. If you put a CPU with a different clock speed on the motherboard, the same multiplication will take more (or less) real time. None of this has anything at all to do with the real-time clock, which just keeps track of what time it is.

answered Oct 03 '22 11:10 by Charles Bretana