I am modifying legacy code that uses a `long long` (`LL`) suffix on a hard-coded constant, as follows:

`0xFFFFFFFFFFFFFFFFLL`

I trust that the `LL` appended to the constant guarantees that it will be interpreted as a `long long`.
However, I do not want to depend on `long long` having any particular compiler-dependent width in bits. Therefore, I would like my variable declaration to do without the `LL` suffix on the constant, and instead use:

`uint64_t a = static_cast<uint64_t>(0xFFFFFFFFFFFFFFFF);`
I would like to think that the constant `0xFFFFFFFFFFFFFFFF` is not interpreted by the compiler as a 32-bit integer BEFORE the cast to `uint64_t`, which would result in `a` being a 64-bit integer containing the value `0xFFFFFFFF` rather than the desired value.
(My current 64-bit compilers of interest are VS 2010 and GCC on Ubuntu 12.04 LTS. However, I would hope that this code behaves in the desired way on any modern compiler.)
Will the above code work as desired on most or all modern compilers, so that the value of `a` is properly set to include all digits of the constant `0xFFFFFFFFFFFFFFFF`, WITHOUT including the `LL` at the end of the constant?
(Note: including `I64` at the end of the constant gives a compiler error. Perhaps there is another suffix that needs to (or can) be included at the end of the constant to tell the compiler to interpret the constant as a 64-bit integer?)
(Also: perhaps even the `static_cast<uint64_t>` is unnecessary, since the variable is explicitly being defined as `uint64_t`?)
To reduce what Andy says to the essentials: if the implementation has one or more standard integer types capable of representing `0xFFFFFFFFFFFFFFFF`, then the literal `0xFFFFFFFFFFFFFFFF` has one of those types. It doesn't really matter to you which one, since no matter which it is, the result of the conversion to `uint64_t` is the same.

If the (pre-C++11) implementation doesn't have any integer type big enough, then (a) the program is ill-formed, so you should get a diagnostic; and (b) it probably won't have `uint64_t` anyway.
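A quick way to convince yourself of this, as a minimal sketch assuming a C++11 compiler so that `static_assert` is available:

```cpp
#include <cstdint>

// Whatever type the unsuffixed literal gets on this implementation,
// converting it to uint64_t yields the full all-ones 64-bit value.
static_assert(static_cast<uint64_t>(0xFFFFFFFFFFFFFFFF) == UINT64_MAX,
              "literal converts to uint64_t without losing digits");
```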
You are correct that the `static_cast` is unnecessary. It does the same conversion that assigning to a `uint64_t` would do anyway. Sometimes a cast will suppress compiler warnings that you get for certain implicit integer conversions, but I think it's unlikely that any compiler would warn for an implicit conversion in this case. Often there won't even be a conversion, since `0xFFFFFFFFFFFFFFFF` will commonly have type `uint64_t` already.
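For instance, this sketch (assuming `<cstdint>` provides `uint64_t`, as it does on both compilers you mention) initializes the variable identically with and without the cast:

```cpp
#include <cstdint>

// Plain initialization performs the same conversion that the
// static_cast would perform explicitly; both yield the same value.
uint64_t a = 0xFFFFFFFFFFFFFFFF;
uint64_t b = static_cast<uint64_t>(0xFFFFFFFFFFFFFFFF);
static_assert(sizeof(a) == 8, "uint64_t is exactly 64 bits");
```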
As an aside, it's probably better to write `static_cast<uint64_t>(-1)`, or just `uint64_t a = -1;`. It's guaranteed to be equal to `0xFFFFFFFFFFFFFFFF`, but it's much easier for a reader to see the difference between `-1` and `0xFFFFFFFFFFFFFFF` than it is to see the difference between `0xFFFFFFFFFFFFFFFF` and `0xFFFFFFFFFFFFFFF`.
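A minimal demonstration of that guarantee (again assuming C++11 for `static_assert`):

```cpp
#include <cstdint>

// Converting -1 to an unsigned type wraps modulo 2^N, so for a 64-bit
// unsigned type the result is all bits set: 0xFFFFFFFFFFFFFFFF.
static_assert(static_cast<uint64_t>(-1) == 0xFFFFFFFFFFFFFFFF,
              "-1 converts to the all-ones 64-bit pattern");

uint64_t a = -1;  // same value; note that some compilers may warn
                  // about the implicit signed-to-unsigned conversion
```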
Per Paragraph 2.14.2/2 of the C++11 Standard:

> The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.
Table 6 specifies that for hexadecimal literal constants with no suffix, the type of the literal should be:

- `int`; or (if it doesn't fit)
- `unsigned int`; or (if it doesn't fit)
- `long int`; or (if it doesn't fit)
- `unsigned long int`; or (if it doesn't fit)
- `long long int`; or (if it doesn't fit)
- `unsigned long long int`.

If we make the reasonable assumption that `0xFFFFFFFFFFFFFFFF` will not fit in any of the first five types in this list, its type should be `unsigned long long int`. As long as you are working with a 64-bit compiler, it is reasonable to assume that values of this type are 64 bits wide, so the constant will be interpreted as a 64-bit `unsigned long long int`, as you hoped.
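If you want the build itself to check these assumptions, a sketch along these lines (C++11) should pass on both of the platforms mentioned. One caveat: on LP64 systems such as 64-bit Linux, the literal may actually get type `unsigned long` rather than `unsigned long long`, since `unsigned long` is already 64 bits wide there; either way it is an unsigned type of at least 64 bits:

```cpp
#include <climits>
#include <type_traits>

// The unsuffixed literal ends up with SOME unsigned type of at least
// 64 bits on common 64-bit platforms (unsigned long on LP64 Linux,
// unsigned long long on LLP64 Windows).
static_assert(std::is_unsigned<decltype(0xFFFFFFFFFFFFFFFF)>::value,
              "literal has an unsigned type");
static_assert(sizeof(0xFFFFFFFFFFFFFFFF) * CHAR_BIT >= 64,
              "literal's type is at least 64 bits wide");
```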