
Why does -INT_MIN = INT_MIN in a signed, two's complement representation?

I still haven't found a reason why the lowest signed negative number doesn't have an equivalent signed positive number. I mean, in a 3-bit binary number for simplicity, 100 is -4, but we can't have a positive 4 in signed format because it overflows. So how do we know that in two's complement 100 is -4, 1000 0000 is -128, and so on? We have no original positive number.

asked Jan 18 '12 by Lews Therin


1 Answer

One way to think about it is that a signed, two's complement format works by assigning each bit a power of two, then flipping the sign of the highest power of two. Let's look at -4, for example, which is represented as 100. This means that the value is

-1 x 2^2 + 0 x 2^1 + 0 x 2^0

If we want to get the positive version of this value, we'd have to negate it to get

 1 x 2^2 - 0 x 2^1 - 0 x 2^0

Notice that this value is equal to

 1 x 2^2 + 0 x 2^1 + 0 x 2^0

In other words, the normal binary representation of this value is 100. However, we're in trouble here, because we're using a signed two's complement representation, which means that we have specifically reserved the 4's bit as the sign bit. Consequently, when we try to interpret the bit pattern 100 as a signed, three-bit, two's complement value, it comes back out identically to what we started with. The shortage of bits is what's hurting here.
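
You can check this on a real machine. Below is a minimal C sketch (my own illustration, not part of the original answer) that applies the usual negation rule, flip the bits and add one, to 8-bit patterns, using unsigned arithmetic so the wraparound is well defined: negating 0x04 gives 0xFC (which is -4), but negating 0x80 (the pattern for -128) gives back 0x80.

    #include <stdint.h>
    #include <stdio.h>

    /* Two's complement negation: flip the bits, then add one.  Done in
       unsigned arithmetic so the wraparound is well defined in C. */
    static uint8_t negate8(uint8_t x) {
        return (uint8_t)(~x + 1);
    }

    int main(void) {
        uint8_t four     = 0x04;  /* 0000 0100 ->    4 */
        uint8_t int8_min = 0x80;  /* 1000 0000 -> -128 */

        printf("negate(0x%02X) = 0x%02X\n",
               (unsigned)four, (unsigned)negate8(four));          /* 0xFC, i.e. -4 */
        printf("negate(0x%02X) = 0x%02X\n",
               (unsigned)int8_min, (unsigned)negate8(int8_min));  /* 0x80 again    */
        return 0;
    }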

More generally, given n bits, of which the first is the sign bit in a two's complement representation, trying to compute -1000...00 will give back the same value, because the bit needed to store the large positive value has the special meaning assigned to it.
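
The same behavior shows up at any width. Here is a small sketch (again my own, using unsigned arithmetic and a mask to simulate an n-bit field) showing that the minimum pattern 100...0 is its own negation for widths 3 through 8:

    #include <stdint.h>
    #include <stdio.h>

    /* Negate x within an n-bit field (3 <= n <= 8 here), i.e. modulo 2^n. */
    static uint32_t negate_n(uint32_t x, unsigned n) {
        uint32_t mask = (1u << n) - 1u;   /* keep only the low n bits */
        return (~x + 1u) & mask;
    }

    int main(void) {
        for (unsigned n = 3; n <= 8; ++n) {
            uint32_t min_pattern = 1u << (n - 1);   /* 100...0: the most negative value */
            printf("n = %u: negate(0x%02X) = 0x%02X\n",
                   n, (unsigned)min_pattern, (unsigned)negate_n(min_pattern, n));
        }
        return 0;
    }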

So why do this at all? The reason is that if you have only n bits, you cannot store all of the values from -2^(n-1) through 2^(n-1), because that is 2^n + 1 different numbers and there are only 2^n different bit patterns. Excluding the largest positive number, 2^(n-1), thus makes it possible to fit all the remaining values into the available bit patterns.
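
To make the counting argument concrete, this sketch (my own, not from the original answer) lists all eight 3-bit patterns and the signed value each one represents; the range runs from -4 up to 3, and no pattern is left over for +4:

    #include <stdio.h>

    /* List every 3-bit pattern and the signed value it represents: the top
       bit contributes -4 rather than +4, so the range is -4 .. 3. */
    int main(void) {
        for (unsigned bits = 0; bits < 8; ++bits) {
            int value = (int)(bits & 3u) - (int)(bits & 4u);
            printf("%u%u%u -> %2d\n",
                   (bits >> 2) & 1u, (bits >> 1) & 1u, bits & 1u, value);
        }
        return 0;
    }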

But why drop the high value and not the low one? This is done to preserve binary compatibility with unsigned integers. In an unsigned integer, the values 0 through 2^(n-1) - 1 are all encoded using the standard base-two representation. Consequently, for signed and unsigned integers to agree at all, the signed integers are designed to be bit-for-bit identical to the first 2^(n-1) unsigned integers, which range from 0 to 2^(n-1) - 1, inclusive. Beyond that point, the unsigned values need the most significant bit to encode larger numbers, while the signed values use it as the sign bit.
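
As a rough illustration of that bit-for-bit agreement (my own sketch; note that converting a pattern with the top bit set back to a signed type is implementation-defined in older C standards, though it wraps on typical two's complement machines), the unsigned and signed readings of a byte coincide up to 0x7F and diverge once the most significant bit is set:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Compare the unsigned and signed readings of a few byte patterns. */
    int main(void) {
        uint8_t patterns[] = { 0x00, 0x05, 0x7F, 0x80, 0xFC };
        for (size_t i = 0; i < sizeof patterns / sizeof patterns[0]; ++i) {
            uint8_t bits = patterns[i];
            printf("0x%02X  unsigned: %3u  signed: %4d\n",
                   (unsigned)bits, (unsigned)bits, (int)(int8_t)bits);
        }
        return 0;
    }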

Hope this helps!

answered Sep 28 '22 by templatetypedef