
Why does the C standard define the range of data types one short?

Tags:

c

The C99 standard defines the range of data types in the following manner:

— minimum value for an object of type signed char
SCHAR_MIN -127 // −(2^7 − 1)
— maximum value for an object of type signed char
SCHAR_MAX +127 // 2^7 − 1
— maximum value for an object of type unsigned char
UCHAR_MAX 255 // 2^8 − 1
— minimum value for an object of type char
CHAR_MIN see below
— maximum value for an object of type char
CHAR_MAX see below
— maximum number of bytes in a multibyte character, for any supported locale
MB_LEN_MAX 1
— minimum value for an object of type short int
SHRT_MIN -32767 // −(2^15 − 1)
— maximum value for an object of type short int
SHRT_MAX +32767 // 2^15 − 1
— maximum value for an object of type unsigned short int
USHRT_MAX 65535 // 2^16 − 1
— minimum value for an object of type int
INT_MIN -32767 // −(2^15 − 1)
— maximum value for an object of type int
INT_MAX +32767 // 2^15 − 1
— maximum value for an object of type unsigned int
UINT_MAX 65535 // 2^16 − 1
— minimum value for an object of type long int
LONG_MIN -2147483647 // −(2^31 − 1)
— maximum value for an object of type long int
LONG_MAX +2147483647 // 2^31 − 1
— maximum value for an object of type unsigned long int
ULONG_MAX 4294967295 // 2^32 − 1

If we look at the negative range, it can actually be one more in magnitude than what is defined here under the two's complement representation that the standard allows. Why are the limits defined like this?
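
To make the question concrete, here is a minimal sketch (assuming a hosted implementation, so <stdio.h> and <limits.h> are available) that prints the limits your implementation actually provides. On a typical two's complement machine the *_MIN values come out one larger in magnitude than the guaranteed minimums quoted above, e.g. INT_MIN is -2147483648 rather than -2147483647 with a 32-bit int:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standard only guarantees magnitudes of at least 2^7 - 1, 2^15 - 1, ...
       for the *_MIN values; a two's complement implementation usually
       provides one more. */
    printf("SCHAR_MIN = %d,   SCHAR_MAX = %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("SHRT_MIN  = %d,   SHRT_MAX  = %d\n", SHRT_MIN, SHRT_MAX);
    printf("INT_MIN   = %d,   INT_MAX   = %d\n", INT_MIN, INT_MAX);
    printf("LONG_MIN  = %ld,  LONG_MAX  = %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}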

asked Oct 18 '12 by bubble


People also ask

What is the range of data type short?

short: In C, the short data type is a signed integer of at least 16 bits. The standard only guarantees a range of -32,767 to 32,767, but on the common two's complement implementations it holds values from -32,768 to 32,767 (inclusive).

What is the short data type in C?

On typical implementations, the short data type takes 2 bytes of storage; int takes 2 or 4 bytes; and long takes 4 bytes on 32-bit systems and commonly 8 bytes on 64-bit systems (the exact sizes are implementation-defined). If you try to assign a value with a fractional part to an integer variable, the fractional part is truncated and only the whole number gets assigned to the variable.
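
As a rough illustration of those points (a sketch only; the exact sizes are implementation-defined, and the figures in the comments are what a typical 64-bit Linux system reports):

#include <stdio.h>

int main(void)
{
    printf("sizeof(short) = %zu\n", sizeof(short)); /* typically 2 */
    printf("sizeof(int)   = %zu\n", sizeof(int));   /* typically 4 */
    printf("sizeof(long)  = %zu\n", sizeof(long));  /* 4 or 8 depending on platform */

    int n = 3.75;           /* fractional part is truncated on conversion */
    printf("n = %d\n", n);  /* prints 3 */
    return 0;
}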


1 Answer

If we look at the negative range, it can actually be one more in magnitude than what is defined here under the two's complement representation that the standard allows. Why are the limits defined like this?

Because C is also designed for old (and new!) architectures, which don't necessarily use two's complement representation for signed integers. Three representations are indeed allowed by the C11 standard (which of these applies is implementation-defined):

§ 6.2.6.2 Integer types

If the sign bit is one, the value shall be modified in one of the following ways:

— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2^M) (two's complement);
— the sign bit has the value −(2^M − 1) (ones' complement).

So, with ones' complement (or sign and magnitude) representation, the minimum value is -(2^M - 1), which is exactly the limit the standard guarantees. However, there is an exception: the optional C99 exact-width types intN_t (e.g. int16_t), which are guaranteed to be stored in two's complement representation (and that's why they are optional: the C standard doesn't force this representation).
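
A small sketch of that difference (assuming the implementation actually provides the optional int16_t, as essentially every two's complement platform does): SHRT_MIN is only guaranteed to reach -32767, while INT16_MIN, when it exists, is exactly -(2^15) = -32768.

#include <stdio.h>
#include <limits.h>
#include <stdint.h>

int main(void)
{
    /* short must span at least -(2^15 - 1) .. 2^15 - 1; the representation
       is left to the implementation. */
    printf("SHRT_MIN  = %d\n", SHRT_MIN);

    /* int16_t is optional, but if present it must be a 16-bit two's
       complement type with no padding bits, so its minimum is exactly -32768. */
    printf("INT16_MIN = %d\n", INT16_MIN);
    return 0;
}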

answered Sep 23 '22 by md5