The time_t data type is defined in the ISO C library for storing system time values. Such values are returned from the standard time() library function. It is a typedef defined in the standard <time.h> header.
The C library function time_t time(time_t *seconds) returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds. If seconds is not NULL, the return value is also stored in the object it points to.
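As a quick illustration, here is a minimal sketch showing both calling conventions; the cast to long long is only there because, as discussed below, the width of time_t is not specified:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);   /* return value only */
    time_t also_now;
    time(&also_now);           /* result also stored through the pointer */

    if (now == (time_t)-1) {
        fprintf(stderr, "time() failed\n");
        return 1;
    }
    printf("Seconds since the Epoch: %lld\n", (long long)now);
    return 0;
}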
Time type. Alias of a fundamental arithmetic type capable of representing times, such as those returned by the function time(). For historical reasons, it is generally implemented as an integral value representing the number of seconds elapsed since 00:00 hours, Jan 1, 1970 UTC (i.e., a Unix timestamp).
The GNU C Library additionally guarantees that time_t is a signed type, and that all of its functions operate correctly on negative time_t values, which are interpreted as times before the epoch.
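For instance, on glibc the following sketch prints a date in 1969; on other C libraries gmtime() may reject a negative value and return NULL:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t before_epoch = (time_t)-86400;   /* one day before the Epoch */
    struct tm *utc = gmtime(&before_epoch);
    if (utc != NULL) {
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
        printf("%s\n", buf);                /* 1969-12-31 00:00:00 UTC on glibc */
    }
    return 0;
}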
The time_t Wikipedia article sheds some light on this. The bottom line is that the type of time_t is not guaranteed by the C specification.
The time_t datatype is a data type in the ISO C library defined for storing system time values. Such values are returned from the standard time() library function. This type is a typedef defined in the standard <time.h> header. ISO C defines time_t as an arithmetic type, but does not specify any particular type, range, resolution, or encoding for it. Also unspecified are the meanings of arithmetic operations applied to time values.

Unix and POSIX-compliant systems implement the time_t type as a signed integer (typically 32 or 64 bits wide) which represents the number of seconds since the start of the Unix epoch: midnight UTC of January 1, 1970 (not counting leap seconds). Some systems correctly handle negative time values, while others do not. Systems using a 32-bit time_t type are susceptible to the Year 2038 problem.
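To see where that limit falls, here is a small sketch (it assumes the machine it runs on has a time_t wider than 32 bits, so gmtime() can still represent the date):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t last_second = (time_t)INT32_MAX;   /* 2147483647 seconds after the Epoch */
    struct tm *utc = gmtime(&last_second);
    char buf[64];
    if (utc && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc))
        printf("a signed 32-bit time_t overflows after %s\n", buf);
    return 0;
}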
[root]# cat time.c
#include <time.h>
int main(int argc, char** argv)
{
time_t test;
return 0;
}
[root]# gcc -E time.c | grep __time_t
typedef long int __time_t;
It's defined in $INCDIR/bits/types.h through:
# 131 "/usr/include/bits/types.h" 3 4
# 1 "/usr/include/bits/typesizes.h" 1 3 4
# 132 "/usr/include/bits/types.h" 2 3 4
Standards
William Brendel quoted Wikipedia, but I prefer it from the horse's mouth.
C99 N1256 standard draft 7.23.1/3 "Components of time" says:
The types declared are size_t (described in 7.17) clock_t and time_t which are arithmetic types capable of representing times
and 6.2.5/18 "Types" says:
Integer and floating types are collectively called arithmetic types.
POSIX 7 sys_types.h says:
[CX] time_t shall be an integer type.
where [CX] is defined as:
[CX] Extension to the ISO C standard.
It is an extension because it makes a stronger guarantee: floating points are out.
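If you only have ISO C to rely on, a quick way to probe which case you are in is a truncation test (a sketch; on any POSIX system it will always report an integer type):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* (time_t)1.5 truncates to 1 only if time_t is an integer type */
    if ((time_t)1.5 == (time_t)1.0)
        printf("time_t is an integer type\n");
    else
        printf("time_t is a floating type\n");
    return 0;
}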
gcc one-liner
No need to create a file as mentioned by Quassnoi:
echo | gcc -E -xc -include 'time.h' - | grep time_t
On Ubuntu 15.10 GCC 5.2 the top two lines are:
typedef long int __time_t;
typedef __time_t time_t;
Command breakdown with some quotes from man gcc:

-E: "Stop after the preprocessing stage; do not run the compiler proper."
-xc: Specify C language, since input comes from stdin, which has no file extension.
-include file: "Process file as if "#include "file"" appeared as the first line of the primary source file."
-: input from stdin

The answer is definitely implementation-specific. To find out definitively for your platform/compiler, just add this output somewhere in your code:
printf ("sizeof time_t is: %d\n", sizeof(time_t));
If the answer is 4 (32 bits) and your data is meant to go beyond 2038, then you have 25 years to migrate your code.
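Here is a complete sketch of that check, with a signedness probe added:

#include <stdio.h>
#include <time.h>

int main(void)
{
    printf("sizeof time_t is: %zu bytes\n", sizeof(time_t));
    /* (time_t)-1 is negative only if time_t is a signed type */
    printf("time_t is %s\n", (time_t)-1 < (time_t)0 ? "signed" : "unsigned");
    return 0;
}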
Your data will be fine if you store it as a string, even with something as simple as:
FILE *stream = [stream file pointer that you've opened correctly];
fprintf (stream, "%d\n", (int)time(NULL));
Then just read it back the same way (fread, fscanf, etc. into an int), and you have your epoch offset time. A similar workaround exists in .Net. I pass 64-bit epoch numbers between Win and Linux systems with no problem (over a communications channel). That brings up byte-ordering issues, but that's another subject.
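A sketch of that round trip, using a hypothetical file name stamp.txt and a long long instead of an int so the stored text survives past 2038:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* write the current epoch offset as text */
    FILE *out = fopen("stamp.txt", "w");
    if (!out) return 1;
    fprintf(out, "%lld\n", (long long)time(NULL));
    fclose(out);

    /* read it back */
    FILE *in = fopen("stamp.txt", "r");
    if (!in) return 1;
    long long stamp = 0;
    if (fscanf(in, "%lld", &stamp) == 1)
        printf("stored epoch offset: %lld\n", stamp);
    fclose(in);
    return 0;
}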
To answer paxdiablo's query, I'd say that it printed "19100" because the program was written this way (and I admit I did this myself in the '80s):
time_t now;
struct tm local_date_time;
now = time(NULL);
// convert, then copy internal object to our object
memcpy (&local_date_time, localtime(&now), sizeof(local_date_time));
printf ("Year is: 19%02d\n", local_date_time.tm_year);
The printf statement prints the fixed string "Year is: 19" followed by the "years since 1900" value (that is the definition of tm->tm_year). In 2000, that value is 100, so the output becomes "19100". "%02d" pads to at least two digits with leading zeros but does not truncate values longer than two digits.
The correct way is (the change is to the last line only):
printf ("Year is: %d\n", local_date_time.tm_year + 1900);
New question: What's the rationale for that thinking?
Under Visual Studio 2008, it defaults to an __int64 unless you define _USE_32BIT_TIME_T. You're better off just pretending that you don't know what it's defined as, since it can (and will) change from platform to platform.
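One way to follow that advice in practice is to cast time_t to intmax_t whenever you print or serialize it; a sketch:

#include <inttypes.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    /* %jd matches intmax_t, which is wide enough for any integer time_t */
    printf("now = %jd\n", (intmax_t)now);
    return 0;
}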