Reading Stanley Lippman's "C++ Primer", I learned that by default decimal integer literals are signed (the smallest of int, long, or long long in which the literal's value fits), whereas octal and hexadecimal literals can be either signed or unsigned (the smallest of int, unsigned int, long, unsigned long, long long, or unsigned long long in which the literal's value fits).
What's the reason for treating those literals differently?
Edit: I'm trying to provide some context
int main()
{
    auto dec = 4294967295;
    auto hex = 0xFFFFFFFF;
    return 0;
}
Debugging the code above in Visual Studio shows that the type of dec is unsigned long and that the type of hex is unsigned int.
This contradicts what I've read. Still, both variables represent the same value yet have different types, which confuses me.
There are three kinds of integer literals in C and C++. They can be of int, long, or long long type, and they can be either signed or unsigned:
decimal: must start with a non-zero digit, for example 1.
octal: must start with a 0 digit, for example 07.
hexadecimal: must start with 0x or 0X, followed by any combination of the digits 0 through 9 and the letters a through f or A through F (the letters A or a through F or f represent the values 10 through 15, respectively), for example 0xAA.
By default a decimal integer literal has the int type. If the value is too large to fit in int, it is checked against long; if it is too large for long, it is checked against long long.
C++.2011 changed the rules for the types of integer literals from C++.2003. This change is documented in §C.2.1 [diff.cpp03.lex]:
2.14.2
Change: Type of integer literals
Rationale: C99 compatibility
The C Standard, both C.1999 and C.2011, defines the conversions in §6.4.4.1. (C++.2011 §2.14.2 essentially copies the content from the C Standard.)
The type of an integer constant is the first of the corresponding list in which its value can be represented.
The C.1999 rationale gives the following explanation:
The C90 rule that the default type of a decimal integer constant is either int, long, or unsigned long, depending on which type is large enough to hold the value without overflow, simplifies the use of constants. The choices in C99 are int, long and long long. C89 added the suffixes U and u to specify unsigned numbers. C99 adds LL to specify long long.
Unlike decimal constants, octal and hexadecimal constants too large to be ints are typed as unsigned int if within range of that type, since it is more likely that they represent bit patterns or masks, which are generally best treated as unsigned, rather than "real" numbers.