 

Concept of clock tick and clock cycles

Tags: c, clock

I have written a very small piece of code to measure the time taken by my multiplication algorithm:

    #include <time.h>   /* for clock(), clock_t, CLOCKS_PER_SEC */

    clock_t begin, end;
    float time_spent;

    begin = clock();
    a = b * c;
    end = clock();
    time_spent = (float)(end - begin) / CLOCKS_PER_SEC;

I am working with MinGW under Windows.

I am guessing that end = clock() will give me the clock ticks at that particular moment. Subtracting begin from it will give me the clock ticks consumed by the multiplication. When I divide by CLOCKS_PER_SEC, I will get the total amount of time.

My first question is: Is there a difference between clock ticks and clock cycles?

My algorithm here is so small that the difference end - begin is 0. Does this mean that my code's execution time was less than 1 tick, and that's why I am getting zero?

asked Jan 10 '23 by silver surfer

2 Answers

My first question is: Is there a difference between clock ticks and clock cycles?

Yes. A clock tick could be 1 millisecond or 1 microsecond, while a clock cycle could be around 0.3 nanoseconds (one cycle of a 3 GHz CPU). On POSIX systems CLOCKS_PER_SEC must be defined as 1000000 (1 million). Note that if the CPU time measurement cannot be obtained with microsecond resolution, the smallest jump in the return value from clock() will be larger than one.
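
A quick way to see the tick granularity your C library advertises is to print CLOCKS_PER_SEC (a minimal sketch; the value printed depends on the platform, e.g. 1000 on Windows/MinGW and 1000000 on POSIX):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* CLOCKS_PER_SEC is the number of clock() ticks per second.
           It says nothing about the actual resolution of the underlying
           measurement, which may be coarser. */
        printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
        return 0;
    }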

My algorithm here is so small that the difference end - begin is 0. Does this mean that my code's execution time was less than 1 tick, and that's why I am getting zero?

Yes. To get a better reading, I suggest that you loop over enough iterations that the measurement spans several seconds, as in the sketch below.
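
A minimal sketch of that approach (the iteration count and the volatile qualifier are illustrative choices, not part of the original code):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* volatile keeps the compiler from optimizing the loop away */
        volatile int a, b = 1234, c = 5678;
        const long ITERATIONS = 100000000L;  /* large enough to run for seconds */

        clock_t begin = clock();
        for (long i = 0; i < ITERATIONS; i++)
            a = b * c;
        clock_t end = clock();

        float total = (float)(end - begin) / CLOCKS_PER_SEC;
        printf("total: %f s, per multiplication: %e s\n",
               total, total / ITERATIONS);
        return 0;
    }

Dividing the total by the iteration count gives an estimate of the cost of one multiplication, even though a single one is far below the tick resolution.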

answered Jan 19 '23 by Klas Lindbäck


To answer the difference between a clock tick and a clock cycle from a systems perspective:

Every processor is accompanied by a physical clock (usually a quartz crystal oscillator), which oscillates at a certain frequency (vibrations/sec). The processor keeps track of time with the help of interrupts generated from the physical clock, which interrupt the processor at every time period T. This interrupt is called a 'clock tick'. The CPU counts the number of interrupts it has seen since the system started, and returns that count when you call clock(). By taking the difference between two clock-tick values (obtained from clock()), you get how many interrupts were seen between those two points in time.

Most modern operating systems program the T value to be 1 microsecond, i.e. the physical clock interrupts every 1 microsecond; this is the lowest clock granularity that is widely supported by physical clocks. With 1 microsecond as T, that works out to 1,000,000 ticks per second. So, with this information, you can calculate the elapsed time from the difference of two clock-tick values: elapsed time = (difference between two ticks) × tick period.
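
As a sketch of that arithmetic (the tick counts here are made-up values for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical tick counts read at two points in time */
        long begin_ticks = 1200000;
        long end_ticks   = 1200350;

        double tick_period = 1.0 / 1000000.0;  /* 1 microsecond per tick, per the text */

        /* elapsed time = (difference between two ticks) * tick period */
        double elapsed = (end_ticks - begin_ticks) * tick_period;
        printf("elapsed: %f seconds\n", elapsed);  /* 350 ticks -> 0.000350 s */
        return 0;
    }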

NOTE: the tick rate defined by the OS has to be <= the vibrations/sec of the physical clock; otherwise there will be a loss of precision.

answered Jan 19 '23 by Vamshi