I understand that the 'natural size' is the width of integer that is processed most efficiently by a particular piece of hardware. When using a short in an array or in arithmetic operations, the short integer must first be converted into an int.
Q: What exactly determines this 'natural size'?
I am not looking for simple answers such as:
If it has a 32-bit architecture, its natural size is 32-bit.
I want to understand why this is most efficient, and why a short must be converted before doing arithmetic operations on it.
Bonus Q: What happens when arithmetic operations are conducted on a long integer?
"The sizes of short, int, and long in C/C++ are dependent upon the implementation of the language; dependent on data model, even short can be anything from 16-bit to 64-bit. For some common platforms: On older, 16-bit operating systems, int was 16-bit and long was 32-bit.
The size of a signed int or unsigned int item is the standard size of an integer on a particular machine. For example, in 16-bit operating systems, the int type is usually 16 bits, or 2 bytes. In 32-bit operating systems, the int type is usually 32 bits, or 4 bytes.
On a 32-bit Machine, sizeof(int*) will return a value 4 because the address value of memory location on a 32-bit machine is 4-byte integers. Similarly, on a 64-bit machine it will return a value of 8 as on a 64-bit machine the address of a memory location are 8-byte integers.
The size of int is usually 4 bytes (32 bits). And, it can take 232 distinct states from -2147483648 to 2147483647 .
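To see the sizes your own implementation uses, here is a minimal C++ sketch; the printed values are implementation-defined and simply reflect your platform's data model (e.g. a 4-byte int with 8-byte pointers on a typical 64-bit system):

    #include <climits>
    #include <cstdio>

    int main() {
        // All of these are implementation-defined; the output reflects the
        // data model the compiler targets (e.g. ILP32, LP64, LLP64).
        std::printf("sizeof(short) = %zu\n", sizeof(short));
        std::printf("sizeof(int)   = %zu\n", sizeof(int));
        std::printf("sizeof(long)  = %zu\n", sizeof(long));
        std::printf("sizeof(int*)  = %zu\n", sizeof(int*));

        // Range of int on this implementation (typically -2147483648 to
        // 2147483647 when int is 32 bits).
        std::printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
    }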
Generally speaking, each computer architecture is designed such that certain type sizes provide the most efficient numerical operations. The specific size then depends on the architecture, and the compiler will select an appropriate size. More detailed explanations as to why hardware designers selected certain sizes for particular hardware would be out of scope for Stack Overflow.
A short must be promoted to int before performing integral operations because that's the way it was in C, and C++ inherited that behavior with little or no reason to change it, since changing it could break existing code. I'm not sure of the original reason it was added to C, but one could speculate that it's related to "default int", where the compiler assumed int if no type was specified.
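A minimal sketch of the promotion in action (the static_assert and <type_traits> machinery here are just a way to have the compiler confirm the result type; they are not part of the original question):

    #include <type_traits>

    int main() {
        short a = 1;
        short b = 2;

        // Both operands undergo integral promotion to int before the
        // addition, so the expression a + b has type int, not short.
        static_assert(std::is_same<decltype(a + b), int>::value,
                      "short + short yields int after integral promotion");

        // Storing the result back in a short is a separate, potentially
        // narrowing conversion.
        short c = static_cast<short>(a + b);
        (void)c;
    }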
Bonus A: from 5/9 (expressions) we learn: Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
And then of interest specifically:
Otherwise, the integral promotions (4.5) shall be performed on both operands.
Then, if either operand is unsigned long the other shall be converted to unsigned long.
Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.
Otherwise, if either operand is long, the other shall be converted to long.
In summary, the compiler tries to use the "best" type it can to do binary operations, with int being the smallest size used.
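A short sketch of how those rules play out; the long + unsigned int case is the one that is platform-dependent, so it is left unasserted here:

    #include <type_traits>

    int main() {
        short         s  = 1;
        int           i  = 2;
        unsigned int  ui = 3u;
        long          l  = 4L;
        unsigned long ul = 5UL;

        // Integral promotion: short operands become int first.
        static_assert(std::is_same<decltype(s + s), int>::value,
                      "promotes to int");

        // If either operand is unsigned long, the other is converted to it.
        static_assert(std::is_same<decltype(i + ul), unsigned long>::value,
                      "unsigned long wins");

        // Otherwise, if either operand is long, the other is converted to long.
        static_assert(std::is_same<decltype(i + l), long>::value,
                      "long wins");

        // long + unsigned int: on a platform where long is 64-bit it can
        // represent every unsigned int value, so the result is long; where
        // long and int are both 32-bit, both operands become unsigned long.
        auto r = l + ui;
        (void)r; (void)s; (void)i;
    }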