C# has various value types, and each serves their own purpose. Int32 ranges from -(0x7FFFFFFF + 1) to 0x7FFFFFFF, and from every machine I've ever run it, it seems that unchecked((int)0xFFFFFFFF) always got me a resulting value of -1. Is this always the case? Furthermore, does .NET always represent -1 as 0xFFFFFFFF in memory on any system? Is the leading bit always the sign bit? Does it always use the Two's Complement signed binary representation for integers?
The documentation for System.Int32 explicitly states that it is stored in two's complement form. It's at the very bottom:
In addition to working with individual integers as decimal values, you may want to perform bitwise operations with integer values, or work with the binary or hexadecimal representations of integer values. Int32 values are represented in 31 bits, with the thirty-second bit used as a sign bit. Positive values are represented by using sign-and-magnitude representation. Negative values are in two's complement representation. This is important to keep in mind when you perform bitwise operations on Int32 values or when you work with individual bits. In order to perform a numeric, Boolean, or comparison operation on any two non-decimal values, both values must use the same representation.
So it appears that the answers to all of your questions are yes.
Also, the range for an Int32 is from -(0x80000000) to 0x7FFFFFFF.
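For what it's worth, you can verify both points directly from C#; here's a minimal console sketch (the class name is my own, and formatting negative integers with "X" shows the two's complement bit pattern):

```csharp
using System;

class Int32RangeDemo
{
    static void Main()
    {
        // Reinterpreting the all-ones bit pattern as a signed 32-bit value gives -1.
        Console.WriteLine(unchecked((int)0xFFFFFFFF));   // -1

        // The documented Int32 range, shown as hex bit patterns.
        Console.WriteLine(int.MinValue.ToString("X8"));  // 80000000
        Console.WriteLine(int.MaxValue.ToString("X8"));  // 7FFFFFFF
    }
}
```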
C# — like pretty much every other computer on the planet — represents integers in two's complement notation. At one point or another there have been CPUs designed around other representations (one's complement, sign-and-magnitude), but these days you can pretty reliably depend on integers being represented in two's complement notation.
We count bits from right to left, with the rightmost bit, bit 0, being the least significant bit and the leftmost bit being the most significant.
The high-order (leftmost) bit is the sign: 0 is positive; 1 is negative.
The remaining bits carry the value. That means the valid domain of a signed integer of size N bits is -(2^(N-1)) <= x <= +(2^(N-1) - 1). Which is to say, one more negative number can be represented than positive: for a 16-bit signed integer, the domain is -32,768 to +32,767.
To put a number into two's complement is easy: take the positive value, invert every bit, and add one.
So, the value +1 is represented as 0x0001, while -1 is represented as
1111 1111 1111 1111
or 0xFFFF.
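You can see both bit patterns, and the invert-and-add-one rule, directly in C#; a small sketch (Convert.ToString with base 2 prints the two's complement pattern for negative values):

```csharp
using System;

class TwosComplementBits
{
    static void Main()
    {
        short plusOne = 1;
        short minusOne = -1;

        // +1: fifteen zero bits and a one, i.e. 0x0001.
        Console.WriteLine(Convert.ToString(plusOne, 2).PadLeft(16, '0'));  // 0000000000000001

        // -1: all sixteen bits set, i.e. 0xFFFF.
        Console.WriteLine(Convert.ToString(minusOne, 2));                  // 1111111111111111

        // The rule itself: invert every bit of +1, then add one.
        short negated = unchecked((short)(~plusOne + 1));
        Console.WriteLine(negated == minusOne);                            // True
    }
}
```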
The reason for two's complement notation is that it makes CPU design simpler: since subtraction is the addition of the negative (e.g., 3 - 2 is the same as 3 + (-2)), designers don't have to build separate subtraction circuitry:
1-1 is the same as 1 + -1 and evaluates to zero.
or in hex,
  0x0001   (decimal +1)
+ 0xFFFF   (decimal -1)
========
  0x0000   (decimal  0)
On most CPUs, the carry into or out of the high-order bit sets the fixed-point overflow flag.
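The worked addition above can be checked in C# by adding the raw 16-bit patterns and keeping only the low 16 bits; this is a sketch in which the truncating cast models the carry out of the high bit being discarded:

```csharp
using System;

class WrapAroundDemo
{
    static void Main()
    {
        ushort a = 0x0001;  // bit pattern of decimal +1
        ushort b = 0xFFFF;  // bit pattern of decimal -1

        // a + b is 0x10000; truncating to 16 bits drops the carry, leaving zero.
        ushort sum = unchecked((ushort)(a + b));
        Console.WriteLine($"0x{sum:X4}");  // 0x0000
    }
}
```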