I'm a little confused about loss of information on numeric types in C#.
When I do this:
int x = 32780;
short y = (short)x;
I get the result -32756 for y, not the expected 32767. Why? How is this calculated?
Range of short: -32768 to 32767
Range of int: -2,147,483,648 to 2,147,483,647
You seem to be expecting a “rounding down” (saturating) effect, rather than what is actually happening, which is a bitwise reinterpretation of the data.

In binary, x is equal to 00000000000000001000000000001100, a 32-bit number with only 16 significant bits. A short is a 16-bit signed integer represented in two’s complement notation. When you convert, only the low 16 bits of x are copied into y, giving 1000000000001100. Importantly, the first (most significant) bit is a 1, which marks a negative number in two’s complement. Reading that 16-bit pattern as two’s complement gives 32780 - 65536 = -32756. Your number hasn't been rounded; it has simply been reinterpreted as a 16-bit value.
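To make this concrete, here is a small self-contained C# sketch (the class name and console output are just for illustration) showing the truncating cast, the equivalent two’s-complement arithmetic, a checked cast that throws instead of silently wrapping, and explicit clamping with Math.Clamp if 32767 is the result you actually wanted:

using System;

class Demo
{
    static void Main()
    {
        int x = 32780;

        // The explicit cast keeps only the low 16 bits of x (truncation).
        short y = (short)x;
        Console.WriteLine(y);            // -32756

        // The same result via two's-complement arithmetic:
        // the pattern 1000000000001100 read as a signed 16-bit value.
        Console.WriteLine(x - 65536);    // -32756

        // In a checked context the conversion throws instead of wrapping.
        try
        {
            short z = checked((short)x);
            Console.WriteLine(z);        // never reached for this x
        }
        catch (OverflowException)
        {
            Console.WriteLine("32780 does not fit in a short");
        }

        // If you want the saturating behavior you expected (32767),
        // clamp explicitly before casting.
        short clamped = (short)Math.Clamp(x, short.MinValue, short.MaxValue);
        Console.WriteLine(clamped);      // 32767
    }
}

C# defaults to unchecked behavior for these narrowing casts; the checked keyword (or the -checked compiler option) turns the overflow into an OverflowException instead of a silent wrap.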