While answering a question on sizeof(), just to see how GCC handles oversized integer constants, I wrote the following code:
#include <stdio.h>
#include <stddef.h>
#include <limits.h>

int main(int ac, char *argv[])
{
    printf("%zu\n", sizeof(9999999999999999999999999999999999999999999999999999));
    printf("%zu %zu\n", sizeof(int), sizeof(long long));
    return 0;
}
When compiled, GCC (4.1.2) issued a warning (as expected):
t.c:8:24: warning: integer constant is too large for its type
t.c: In function main:
t.c:8: warning: integer constant is too large for long type
And the output is:
16
4 8
How does GCC conclude that sizeof(9999999999999999999999999999999999999999999999999999) is 16?! No matter how big the number is, the result is always 16 for any integer literal greater than LLONG_MAX. On my 64-bit platform, sizeof(long) is equal to sizeof(long long).
Why does GCC behave this way? Is it some sort of undefined behaviour?!
The sizeof operator gives the amount of storage, in bytes, required to store an object of the type of the operand. This operator allows you to avoid specifying machine-dependent data sizes in your programs.
One of the most common uses for the sizeof operator is to determine the size of objects referred to during storage allocation and in input and output functions. Another use of sizeof is in porting code across platforms, since it reports the size a data type occupies on each platform.
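For instance, a common allocation idiom lets sizeof track the pointed-to type automatically (a minimal sketch; the names are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 10;
        long *p = malloc(n * sizeof *p);  /* sizeof *p follows p's type */
        if (p == NULL)
            return 1;
        printf("allocated %zu bytes\n", n * sizeof *p);
        free(p);
        return 0;
    }

If p is later changed to point to a different type, the allocation size stays correct without touching the malloc call.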
It is a compile-time unary operator used to compute the size of its operand. It can be applied to a variable or to any data type, including floating-point and pointer types. When sizeof is applied to a data type, it simply returns the amount of memory allocated to that type.
Answer: sizeof returns the size of the type in bytes. For example, sizeof(char) is 100% guaranteed to be 1, but that does not mean it is one octet (8 bits). As the standard puts it: The sizeof operator yields the size (in bytes) of its operand, which may be an expression or the parenthesized name of a type.
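A quick way to check this on a given platform is to print CHAR_BIT next to sizeof(char) (a minimal sketch using only standard headers):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* sizeof(char) is 1 by definition; CHAR_BIT says how many
           bits that single byte holds (at least 8). */
        printf("sizeof(char) = %zu, CHAR_BIT = %d\n", sizeof(char), CHAR_BIT);
        return 0;
    }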
gcc has a special non-standard type called __int128, which is a 128-bit (16-byte) integer, so sizeof(__int128) returns 16. It seems your ultra-large constant is treated as this __int128 type. Consider the following code:
typeof(9999999999999999999999999999999999999999999999999999) (*funcptr_a)();
unsigned __int128 (*funcptr_b)();

void dummy() {
    funcptr_a = funcptr_b;
}
If I change any of the types in the declarations of funcptr_a and funcptr_b, the assignment funcptr_a = funcptr_b; triggers a warning. I don't get a warning for the code as written (gcc 4.6.3 on 64-bit Linux), so I know the type of the large integer constant is unsigned __int128.
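For instance (a sketch of one such variation; the exact warning text depends on the gcc version), changing funcptr_b's return type to long long makes the two function-pointer types incompatible:

    typeof(9999999999999999999999999999999999999999999999999999) (*funcptr_a)();
    long long (*funcptr_b)();  /* return type changed from unsigned __int128 */

    void dummy() {
        funcptr_a = funcptr_b;  /* now warns: incompatible pointer type */
    }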
Btw, with clang 3.0 (also 64-bit Linux) your code outputs
8
4 8
I'd say this is not undefined but implementation-defined behavior. To quote the C99 standard (Sec. 6.4.4.1, page 56):
[...] If an integer constant cannot be represented by any type in its list, it may have an extended integer type, if the extended integer type can represent its value. [...]
We can ask gcc itself:
    __typeof__ (9999999999999999999999999999999999999999999999999999) var = 1;
    printf("%lld\n", var);
sizes.c:10:5: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 2 has type ‘__int128’ [-Wformat]
So gcc chooses, if supported, the type __int128 for the too-large decimal constant.
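As a quick check (a sketch assuming a compiler that provides __int128), both the signed and unsigned variants occupy 16 bytes; and since printf has no conversion specifier for __int128 (hence the -Wformat warning above), one workaround is to print a value as two 64-bit halves:

    #include <stdio.h>

    int main(void)
    {
        printf("%zu %zu\n", sizeof(__int128), sizeof(unsigned __int128));  /* 16 16 */

        /* 1e20 does not fit in 64 bits, so it exercises the upper half. */
        unsigned __int128 v = (unsigned __int128)1000000000000000000ULL * 100;
        printf("high = %llu, low = %llu\n",
               (unsigned long long)(v >> 64),
               (unsigned long long)v);
        return 0;
    }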