In a 16-bit C compiler we have 2 bytes to store an integer and 1 byte for a character. For unsigned integers the range is 0 to 65535, for signed integers it is -32768 to 32767, and for unsigned characters it is 0 to 255. By analogy with the integer types, shouldn't the signed character range be -128 to 127? Why is it -127 to 127? What about the remaining bit pattern?
A 16-bit integer can store 2^16 (or 65,536) distinct values. In an unsigned representation, these values are the integers between 0 and 65,535; using two's complement, possible values range from -32,768 to 32,767.
Fundamental data types: a byte is eight bits, a word is 2 bytes (16 bits), a doubleword is 4 bytes (32 bits), and a quadword is 8 bytes (64 bits).
The range of values is 0 to 2^n − 1; for example, 0 to 2^8 − 1 = 0 to 255.
A signed integer is a 32-bit datum that encodes an integer in the range [-2147483648, 2147483647]. An unsigned integer is a 32-bit datum that encodes a nonnegative integer in the range [0, 4294967295]. The signed integer is represented in two's complement notation.
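As a quick check, here is a minimal sketch (assuming a C99-or-later compiler that provides the fixed-width types in <stdint.h>) that prints exactly these ranges:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 16-bit types: 2^16 = 65536 distinct bit patterns */
    printf("uint16_t: 0 .. %u\n", (unsigned)UINT16_MAX);                /* 0 .. 65535 */
    printf("int16_t : %d .. %d\n", (int)INT16_MIN, (int)INT16_MAX);     /* -32768 .. 32767 */

    /* 32-bit types */
    printf("uint32_t: 0 .. %lu\n", (unsigned long)UINT32_MAX);          /* 0 .. 4294967295 */
    printf("int32_t : %ld .. %ld\n", (long)INT32_MIN, (long)INT32_MAX); /* -2147483648 .. 2147483647 */
    return 0;
}
```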
I think you're mixing two things: the minimum ranges the C standard requires for signed char, int, etc., and the ranges a particular implementation actually provides. These don't necessarily have to be the same, as long as the implemented range is a superset of the range required by the standard.
According to the C standard, the implementation-defined values of SCHAR_MIN and SCHAR_MAX shall be equal or greater in magnitude (absolute value) to, and of the same sign as:

SCHAR_MIN  -127
SCHAR_MAX  +127

i.e. only 255 values, not 256.
However, the limits defined by a compliant implementation can be 'greater' in magnitude than these, i.e. [-128, +127] is allowed by the standard too. And since most machines represent numbers in two's complement form, [-128, +127] is the range you will get to see most often.
Actually, even the minimum range of int defined by the C standard is symmetric about zero. It is:

INT_MIN  -32767
INT_MAX  +32767

i.e. only 65535 values, not 65536. But again, most machines use two's complement representation, which means they offer the range [-32768, +32767].
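A minimal sketch to see the difference on your own machine (the printed values are implementation-defined; the comments assume a typical two's complement implementation):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only guarantees at least -127..+127 and -32767..+32767. */
    printf("SCHAR_MIN = %d, SCHAR_MAX = %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("INT_MIN   = %d, INT_MAX   = %d\n", INT_MIN, INT_MAX);
    /* On a typical two's complement machine this prints:
       SCHAR_MIN = -128, SCHAR_MAX = 127
       INT_MIN   = -2147483648, INT_MAX   = 2147483647
       (or -32768 / 32767 if int is 16 bits wide). */
    return 0;
}
```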
While in two's complement form it is possible to represent 256 signed values in 8 bits (i.e. [-128, +127]), there are other signed number representations where this is not possible.
In the sign-magnitude representation, one bit is reserved for the sign, so:

00000000
10000000

both mean the same thing, i.e. 0 (or rather, +0 and -0). This means one value is wasted, and thus the sign-magnitude representation can only hold values from -127 (11111111) to +127 (01111111) in 8 bits.
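To make that concrete, here is a hypothetical decoder (the helper sign_magnitude is purely an illustration, not something your compiler or hardware does) that interprets an 8-bit pattern under sign-magnitude rules:

```c
#include <stdio.h>

/* Hypothetical decoder: interpret an 8-bit pattern as sign-magnitude. */
static int sign_magnitude(unsigned char b)
{
    int magnitude = b & 0x7F;                    /* low 7 bits hold the magnitude */
    return (b & 0x80) ? -magnitude : magnitude;  /* top bit is only the sign */
}

int main(void)
{
    printf("%4d\n", sign_magnitude(0x00));  /*    0  (+0) */
    printf("%4d\n", sign_magnitude(0x80));  /*    0  (-0, the wasted pattern) */
    printf("%4d\n", sign_magnitude(0xFF));  /* -127  (11111111) */
    printf("%4d\n", sign_magnitude(0x7F));  /*  127  (01111111) */
    return 0;
}
```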
In the one's complement representation (negate by doing bitwise NOT):

00000000
11111111

both mean the same thing, i.e. 0. Again, only values from -127 (10000000) to +127 (01111111) can be represented in 8 bits.
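The same kind of sketch for one's complement (again a hypothetical helper, ones_complement, just for illustration):

```c
#include <stdio.h>

/* Hypothetical decoder: interpret an 8-bit pattern as one's complement. */
static int ones_complement(unsigned char b)
{
    if (b & 0x80)                          /* top bit set: negative */
        return -(int)(unsigned char)~b;    /* value is -(bitwise NOT of the pattern) */
    return b;                              /* otherwise the value is the pattern itself */
}

int main(void)
{
    printf("%4d\n", ones_complement(0x00));  /*    0  (+0) */
    printf("%4d\n", ones_complement(0xFF));  /*    0  (-0, the wasted pattern) */
    printf("%4d\n", ones_complement(0x80));  /* -127  (10000000) */
    printf("%4d\n", ones_complement(0x7F));  /*  127  (01111111) */
    return 0;
}
```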
If the C standard required the range to be [-128, +127], this would essentially exclude machines using such representations from being able to run C programs efficiently: they would need an additional bit to represent that range, i.e. 9 bits to store a signed character instead of 8. This is why the C standard requires only [-127, +127] for signed characters: it gives implementations the freedom to choose whichever integer representation suits their needs while still adhering to the standard efficiently. The same logic applies to int as well.