Does the C++ standard guarantee whether an integer conversion that both widens and casts away the sign will sign-extend or zero-extend?
The quick test:
#include <cstdint>

int32_t s = -1;
uint64_t u = s;
produces 0xFFFFFFFFFFFFFFFF under Xcode, but is that behavior defined in the first place?
When you do
uint64_t u = s;
[dcl.init]/17.9 applies, which states:
the initial value of the object being initialized is the (possibly converted) value of the initializer expression. A standard conversion sequence ([conv]) will be used, if necessary, to convert the initializer expression to the cv-unqualified version of the destination type; no user-defined conversions are considered.
and if we look in [conv], under integral conversions, we have
Otherwise, the result is the unique value of the destination type that is congruent to the source integer modulo 2^N, where N is the width of the destination type.
So you are guaranteed that -1 becomes the largest value the destination type can represent, -2 becomes one less than that, -3 one less than -2, and so on: the value "wraps around".
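As a quick sanity check, here is a minimal sketch (the hex constants simply spell out 2^64 - 1, 2^64 - 2, and 2^64 - 3) that must compile on any conforming implementation:

#include <cstdint>

// Each negative value is reduced modulo 2^64 into the top of the unsigned range.
static_assert(uint64_t(int32_t(-1)) == 0xFFFFFFFFFFFFFFFFu, "wraps to the maximum");
static_assert(uint64_t(int32_t(-2)) == 0xFFFFFFFFFFFFFFFEu, "one below the maximum");
static_assert(uint64_t(int32_t(-3)) == 0xFFFFFFFFFFFFFFFDu, "two below the maximum");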
In fact,
unsigned_type some_name = -1;
is the canonical way to create a variable holding the maximum value of that unsigned integer type.
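For example, here is a small sketch (the name all_ones is just for illustration) confirming that the idiom agrees with std::numeric_limits:

#include <cstdint>
#include <limits>

// -1 converted to uint64_t is 2^64 - 1, i.e. the type's maximum value.
uint64_t all_ones = -1;
static_assert(uint64_t(-1) == std::numeric_limits<uint64_t>::max(),
              "the -1 idiom yields the maximum value");

Note that brace initialization (uint64_t all_ones{-1};) would instead be ill-formed, since braces reject narrowing conversions; the copy-initialization form shown above is the one that relies on the modular-conversion rule.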