
What’s the correct way to use printf to print a clock_t?

Tags: c, io, printf, clock

I'm currently using an explicit cast to unsigned long long and %llu to print it, but since size_t has the %zu specifier, why doesn't clock_t have one?

There isn't even a macro for it. Maybe I can assume that on an x64 system (OS and CPU) size_t is 8 bytes long (and even in that case, they have provided %zu), but what about clock_t?

asked Jul 04 '09 by Spidey

People also ask

What type is clock_t?

clock_t is used to measure processor and CPU time. It may be an integer or a floating-point type. Its values are counts of clock ticks since some arbitrary event in the past. The number of clock ticks per second is system-specific.

What is CLOCKS_PER_SEC in C?

CLOCKS_PER_SEC is a macro in the C language, defined in the <time.h> header file. It expands to an expression of type clock_t (the return type of clock(void)) and gives the number of clock ticks per second for a particular machine.
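For illustration (not part of the original answers), a minimal sketch showing that CLOCKS_PER_SEC itself has type clock_t, so it also needs a cast before being handed to printf:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* CLOCKS_PER_SEC expands to an expression of type clock_t,
           so convert it to a known type before printing. */
        printf("ticks per second: %Lf\n", (long double)CLOCKS_PER_SEC);
        return 0;
    }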

Is clock_t unsigned?

glibc defines clock_t as signed long rather than unsigned long. On 32-bit targets where the value can wrap, this makes it impossible to subtract clock_t values to measure intervals (signed overflow results in undefined behavior).


2 Answers

There seems to be no perfect way. The root of the problem is that clock_t can be either an integer or a floating-point type.

clock_t can be a floating point type

As Bastien Léonard mentions for POSIX (go upvote him), C99 N1256 draft 7.23.1/3 also says that:

[clock_t and time_t are] arithmetic types capable of representing times

and 6.2.5/18:

Integer and floating types are collectively called arithmetic types.

so the standard allows clock_t to be either an integer or a floating-point type.

If you are going to divide by CLOCKS_PER_SEC, use long double

The return value of clock() is implementation-defined, and the only way to get standard meaning out of it is to divide by CLOCKS_PER_SEC to find the number of seconds:

    clock_t t0 = clock();
    /* Work. */
    clock_t t1 = clock();
    printf("%Lf", (long double)(t1 - t0));
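For completeness, here is a self-contained sketch (my addition, not code from the original answer) that also performs the division by CLOCKS_PER_SEC and prints the result through long double:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        clock_t t0 = clock();
        /* ... work being timed ... */
        clock_t t1 = clock();
        /* long double accommodates both integer and floating-point clock_t;
           dividing by CLOCKS_PER_SEC converts ticks to seconds of CPU time. */
        long double seconds = (long double)(t1 - t0) / CLOCKS_PER_SEC;
        printf("%Lf seconds\n", seconds);
        return 0;
    }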

This is good enough, although not perfect, for the two following reasons:

  • there seems to be no analogue to intmax_t for floating-point types (see: How to get the largest precision floating point data type of implementation and its printf specifier?), so if a larger floating-point type comes out tomorrow, it could be used for clock_t and break your code.

  • if clock_t is an integer, the conversion to a floating-point type is well defined to use the nearest representable value. You may lose precision, but it would not matter much compared to the absolute value, and would only happen for huge amounts of time; e.g. long double on x86 is the 80-bit float with a 64-bit significand, which can represent millions of years' worth of seconds exactly.

Go upvote lemonad who said something similar.

If you suppose it is an integer, use %ju and uintmax_t

Although unsigned long long is currently the largest standard integer type:

  • a larger one could come out in the future
  • the standard already explicitly allows larger implementation-defined types (kudos to @FUZxxl), and clock_t could be one of them

so it is best to typecast to the largest unsigned integer type possible:

    #include <stdint.h>

    printf("%ju", (uintmax_t)(clock_t)1);

uintmax_t is guaranteed to be the largest unsigned integer type the implementation supports.
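As a usage sketch (my addition, assuming clock() succeeds and returns a non-negative tick count), the same cast applied to an actual clock() value:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        clock_t t = clock();
        /* %ju expects uintmax_t, so this works whatever integer type
           clock_t actually is (the error value (clock_t)-1 aside). */
        printf("%ju\n", (uintmax_t)t);
        return 0;
    }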

uintmax_t and its printf specifier %ju were introduced in C99, and GCC, for example, implements them.

As a bonus, this solves once and for all the question of how to reliably printf any integer type (which, unfortunately, is not necessarily enough for clock_t, since it may not be an integer).

What could go wrong if clock_t is actually a floating-point type and you convert it to an integer:

  • if the value is too large to fit in the integer type, the behavior is undefined
  • if the value is much smaller than 1, it gets truncated to 0 and you won't see anything

Since those consequences are much harsher than those of the integer-to-floating-point conversion, using a floating-point type is likely the better idea.

On glibc 2.21 it is an integer

The manual says that using double is a better idea:

On GNU/Linux and GNU/Hurd systems, clock_t is equivalent to long int and CLOCKS_PER_SEC is an integer value. But in other systems, both clock_t and the macro CLOCKS_PER_SEC can be either integer or floating-point types. Casting CPU time values to double, as in the example above, makes sure that operations such as arithmetic and printing work properly and consistently no matter what the underlying representation is.

In glibc 2.21:

  • clock_t is long int:

    • time/time.h sets it to __clock_t
    • bits/types.h sets it to __CLOCK_T_TYPE
    • bits/typesizes.h sets it to __SLONGWORD_TYPE
    • bits/types.h sets it to long int
  • clock() in Linux is implemented with sys_clock_gettime:

    • sysdeps/unix/sysv/linux/clock.c calls __clock_gettime
    • sysdeps/unix/clock_gettime.c calls SYSDEP_GETTIME_CPU
    • sysdeps/unix/sysv/linux/clock_gettime.c calls SYSCALL_GETTIME which finally makes an inline system call

    man clock_gettime tells us that it returns the time in a struct timespec, which in GCC contains long int fields.

    So the underlying implementation really returns integers.
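For reference, a minimal sketch (my addition, POSIX rather than ISO C) of calling that underlying clock_gettime interface directly, using the per-process CPU clock:

    /* Expose POSIX clock_gettime when compiling with a strict -std= mode. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        /* The per-process CPU-time clock that clock() is built on. */
        if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) == 0)
            printf("%jd.%09ld seconds of CPU time\n",
                   (intmax_t)ts.tv_sec, ts.tv_nsec);
        return 0;
    }

On older glibc versions this may require linking with -lrt.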

See also

  • How to print types of unknown size like ino_t?
  • How to use printf to display off_t, nlink_t, size_t and other special types?

As far as I know, the way you're doing it is the best. Except that clock_t may be a real type:

time_t and clock_t shall be integer or real-floating types.

http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/types.h.html

answered Oct 13 '22 by Bastien Léonard