I was thinking about data type ranges, and a question arose. As we know, a signed char's range is from -128 to 127. I get how 127 comes about, i.e. 01111111 = +127.
But I could not work out how -128 comes about: if we just turn on the sign bit we get 11111111, so how is that equal to -128?
Most of the time, computers use what's called 2's complement to represent signed integers.
The way 2's complement works is that the possible values form one big loop: counting up from 0 you reach MAX_VALUE, the next bit pattern wraps around to MIN_VALUE, and counting further eventually brings you back to 0.
So the bit pattern of the minimum value is the bit pattern of the maximum value plus one: 01111111 = 127, and 10000000 = -128.
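To see this on a real machine, here is a minimal C sketch (C assumed, since the question is about signed char). Note that converting the bit pattern 10000000 to a signed char is implementation-defined in C, but on ordinary two's-complement hardware it simply keeps the bits:

```c
#include <stdio.h>

int main(void)
{
    /* 01111111 is the largest signed char value. */
    signed char max = 0x7F;                 /* 127 */

    /* Reinterpret the pattern 10000000 as a signed char.  The
       conversion of an out-of-range value is implementation-defined,
       but a typical two's-complement machine just keeps the bits. */
    unsigned char bits = 0x80;              /* 10000000 */
    signed char min = (signed char)bits;

    printf("01111111 = %d\n", max);         /* prints 127 */
    printf("10000000 = %d\n", min);         /* prints -128 */
    return 0;
}
```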
This has the nice property of behaving exactly the same as unsigned arithmetic: if I want to compute -2 + 1, I have 11111110 + 00000001 = 11111111 = -1, using all the same hardware as unsigned addition.
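Here is a small sketch showing that the same bit-level addition gives the right answer whether you read the bytes as signed or unsigned (the variable names and hexadecimal constants are just for illustration):

```c
#include <stdio.h>

int main(void)
{
    signed char a = -2;                            /* bit pattern 11111110 */
    signed char b = 1;                             /* bit pattern 00000001 */
    signed char sum = (signed char)(a + b);        /* -1 */

    /* Do the same addition on the raw bit patterns, ignoring sign. */
    unsigned char ua = (unsigned char)a;           /* 0xFE */
    unsigned char ub = (unsigned char)b;           /* 0x01 */
    unsigned char usum = (unsigned char)(ua + ub); /* 0xFF */

    printf("signed:   %d + %d = %d\n", a, b, sum);
    printf("unsigned: 0x%02X + 0x%02X = 0x%02X\n", ua, ub, usum);

    /* Both results have the bit pattern 11111111, which is why one
       adder circuit serves for both signed and unsigned addition. */
    return 0;
}
```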
The reason there's an extra value on the low end is that we choose to have all numbers with the high-bit set be negative, which means that 0 takes a value away from the positive side.
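If you want to convince yourself of the count, a quick sketch that walks all 256 bit patterns (again relying on the implementation-defined but near-universal conversion to signed char) shows 128 negative values, one zero, and 127 positive values:

```c
#include <stdio.h>

int main(void)
{
    int negatives = 0, zeros = 0, positives = 0;

    /* Classify every 8-bit pattern by its signed char interpretation. */
    for (int pattern = 0; pattern < 256; pattern++) {
        signed char v = (signed char)(unsigned char)pattern;
        if (v < 0)
            negatives++;
        else if (v == 0)
            zeros++;
        else
            positives++;
    }

    /* Prints: negative: 128, zero: 1, positive: 127 */
    printf("negative: %d, zero: %d, positive: %d\n",
           negatives, zeros, positives);
    return 0;
}
```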
In two's complement, -128 is 10000000.