Why is (int64_t)-1 + (uint32_t)0 signed in C? It looks like it's int64_t, but my intuition would say uint64_t.

FYI, when I run
#include <stdint.h>
#include <stdio.h>

#define BIT_SIZE(x) (sizeof(x) * 8)
/* (x) * 0 - 1 evaluates to -1 in the promoted type of x; shifted down to the
   top-bit position it yields 1 for unsigned types, while for signed types the
   result stays negative and the cast to unsigned makes it far greater than 2. */
#define IS_UNSIGNED(x) ((unsigned)(((x) * 0 - 1) >> (BIT_SIZE(x) - 1)) < 2)
#define DUMP(x) dump(#x, IS_UNSIGNED(x), BIT_SIZE(x))

static void dump(const char *x_str, int is_unsigned, int bit_size) {
    /* "u" + !is_unsigned skips the 'u' prefix for signed types */
    printf("%s is %sint%d_t\n", x_str, "u" + !is_unsigned, bit_size);
}

int main(int argc, char **argv) {
    (void)argc; (void)argv;
    DUMP(42);
    DUMP(42U);
    DUMP(42L);
    DUMP(42UL);
    DUMP(42LL);
    DUMP(42ULL);
    DUMP('x');
    DUMP((char)'x');
    DUMP(1 + 2U);
    DUMP(1 << 2U);
    DUMP((int32_t)-1 + (uint64_t)0);
    DUMP((int64_t)-1 + (uint32_t)0);
    return 0;
}
I get the following output:
42 is int32_t
42U is uint32_t
42L is int32_t
42UL is uint32_t
42LL is int64_t
42ULL is uint64_t
'x' is int32_t
(char)'x' is int8_t
1 + 2U is uint32_t
1 << 2U is int32_t
(int32_t)-1 + (uint64_t)0 is uint64_t
(int64_t)-1 + (uint32_t)0 is int64_t
Why is (int64_t)-1 + (uint32_t)0 signed?
Because the conversion rank of int64_t is greater than the conversion rank of uint32_t, and int64_t can represent every value of uint32_t. Under the usual arithmetic conversions, (uint32_t)0 is therefore converted to int64_t in the + expression, and int64_t is the type of the resulting expression.
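As a minimal sketch (not part of the original program; the variable names are mine, and it assumes a typical platform with 32-bit int), the two mixed-type expressions from the question land on opposite sides of this rule:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* uint32_t fits entirely in int64_t, so the unsigned operand is
       converted to the signed type and the sum stays negative. */
    int64_t signed_result = (int64_t)-1 + (uint32_t)0;
    printf("%" PRId64 "\n", signed_result);    /* -1 */

    /* int32_t does not outrank uint64_t, so the signed operand is
       converted to uint64_t and -1 wraps around. */
    uint64_t unsigned_result = (int32_t)-1 + (uint64_t)0;
    printf("%" PRIu64 "\n", unsigned_result);  /* 18446744073709551615 */
    return 0;
}

The first sum keeps the value -1, while the second wraps to UINT64_MAX, matching the signedness reported by the DUMP output above.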