
C: Different implementation of clock() in Windows and other OS?

I had to write a very simple console program for university that had to measure the time required to make an input.

Therefore I used clock() before and after an fgets() call. When running on my Windows computer it worked perfectly. However, when running on my friend's MacBook and Linux PC it gave extremely small results (only a few microseconds of time).

I tried the following code on all 3 OS:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t t;

    printf("Sleeping for a bit\n");

    t = clock();

    // Alternatively some fgets(...)
    usleep(999999);

    t = clock() - t;

    printf("Processor time spent: %f\n", ((double)t) / CLOCKS_PER_SEC);
    return 0;
}

On Windows the output shows 1 second (or the amount of time you took to type when using fgets); on the other two OSes it shows little more than 0 seconds.

Now my question is why the implementation of clock() differs so much between these OSes. On Windows it seems like the clock keeps ticking while the thread is sleeping/waiting, but on Linux and Mac it doesn't?

Edit: Thank you for the answers so far, guys. So it's really just Microsoft's non-conforming implementation.

Could anyone please answer my last question:

Also, is there a way to measure what I wanted to measure on all 3 systems using only C standard libraries, since clock() only behaves this way on Windows?

asked Nov 27 '14 by Maximilian Schier




2 Answers

You're encountering a known bug in Microsoft's C Runtime. Even though the behavior does not conform to any ISO C standard, it won't be fixed. From the bug report:

However, we have opted to avoid reimplementing clock() in such a way that it might return time values advancing faster than one second per physical second, as this change would silently break programs depending on the previous behavior (and we expect there are many such programs).

answered Sep 30 '22 by cremno


If we look at the source code for clock() on Mac OS X, we see that it is implemented using getrusage, and reads ru_utime + ru_stime. These two fields measure CPU time used by the process (or by the system, on behalf of the process). This means that if usleep (or fgets) causes the OS to swap in a different program for execution until something happens, then any amount of real time (also called "wall time", as in "wall clock") elapsed does not count against the value that clock() returns on Mac OS X. You could probably dig in and find something similar in Linux.

On Windows, however, clock() returns the amount of wall time elapsed since the start of the process.

In pure C, I am not aware of a function available on OS X, Linux and Windows that will return wall time with sub-second precision (time.h being fairly limited). Windows has GetSystemTimeAsFileTime, which returns time in 100 ns slices, and BSD-derived systems have gettimeofday, which returns time with microsecond precision.

If one-second precision is acceptable to you, you could use time(NULL).

If C++ is an option, you could use one of the clocks from std::chrono to get time to the desired precision.

answered Sep 30 '22 by zneak