According to this, integer literals without a type suffix are always ints. However, both gcc and clang interpret 0xffffffff (or any literal that explicitly sets the sign bit other than by using -) as unsigned. Which is correct? (According to this, the compilers are.)
Per Paragraph 2.14.2/2 of the C++11 Standard:

The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.

Table 6 reports that for hexadecimal constants, the type should be:

int; or (if it doesn't fit)
unsigned int; or (if it doesn't fit)
long int; or (if it doesn't fit)
unsigned long int; or (if it doesn't fit)
long long int; or
unsigned long long int.

Assuming your implementation has a 32-bit int, since 0xffffffff does not fit in an int, its type should be unsigned int. For an implementation with a 64-bit int, the type would be int.
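You can check this deduction yourself. The following is a minimal sketch (not part of the original answer) that assumes an implementation with a 32-bit int, such as a typical x86-64 target; it uses decltype and std::is_same to confirm the literal's type at compile time:

```cpp
#include <climits>
#include <type_traits>

// Sketch assuming 32-bit int: 0xffffffff does not fit in int,
// so the hexadecimal literal falls through to unsigned int.
static_assert(sizeof(int) * CHAR_BIT == 32, "this sketch assumes a 32-bit int");
static_assert(std::is_same<decltype(0xffffffff), unsigned int>::value,
              "0xffffffff has type unsigned int when int is 32 bits");

int main() {}
```

Compiling this with -std=c++11 under gcc or clang should succeed on such an implementation, which is consistent with their treating the literal as unsigned.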
Notice that if you had written the same literal as a decimal constant instead, the type could only have been:

int; or (if it doesn't fit)
long int; or (if it doesn't fit)
long long int.
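A companion sketch (again assuming a 32-bit int, and not taken from the original answer) shows the contrast: the same value written in decimal stays signed, becoming long int on LP64 targets or long long int where long is 32 bits, but never unsigned int:

```cpp
#include <type_traits>

// Sketch assuming 32-bit int: the unsuffixed decimal literal 4294967295
// (same value as 0xffffffff) skips the unsigned types entirely.
static_assert(std::is_same<decltype(4294967295), long int>::value ||
              std::is_same<decltype(4294967295), long long int>::value,
              "decimal 4294967295 is long int or long long int here");
static_assert(!std::is_same<decltype(4294967295), unsigned int>::value,
              "an unsuffixed decimal literal is never unsigned int");

int main() {}
```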