I know the standard says if the integer literal does not fit the int, it tries unsigned int, and so forth, per section 2.14.2 Table 6 in the standard.
My question is: what's the criteria to determine it fits or not?
Why do both std::is_signed<decltype(0xFFFFFFFF)>::value and std::is_signed<decltype(0x80000000)>::value give false? Why don't they fit in int? 0x80000000 has the same bit representation as signed -2147483648.
You don't need to look at "bit representation" to check if the number fits or not.
Assuming sizeof(int) == 4, int can represent numbers from -2^31 to 2^31 - 1 inclusive.
0x80000000 is 2^31, which is 1 larger than the maximum value, so it does not fit in int; the compiler then tries the next type in the list (for a hexadecimal literal, unsigned int), which does hold it.