What does 'Natural Size' really mean in C++?

I understand that the 'natural size' is the integer width that a particular piece of hardware processes most efficiently. When a short is used in arithmetic operations (even when it is read out of an array), it must first be converted to int.
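
For example, a minimal sketch of the promotion I mean (the printed sizes assume a typical platform where short is 2 bytes and int is 4):

    #include <iostream>
    #include <type_traits>

    int main() {
        short a = 1, b = 2;
        // Both operands of + undergo the integral promotion, so a + b has type int.
        static_assert(std::is_same<decltype(a + b), int>::value,
                      "short + short yields int");
        std::cout << sizeof(a) << ' ' << sizeof(a + b) << '\n'; // typically: 2 4
    }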

Q: What exactly determines this 'natural size'?

I am not looking for simple answers such as

If it has a 32-bit architecture, its natural size is 32-bit

I want to understand why this is most efficient, and why a short must be converted before doing arithmetic operations on it.

Bonus Q: What happens when arithmetic operations are conducted on a long integer?

asked Jun 23 '14 by dayuloli

People also ask

What determines the size of int?

"The sizes of short, int, and long in C/C++ are dependent upon the implementation of the language; dependent on data model, even short can be anything from 16-bit to 64-bit. For some common platforms: On older, 16-bit operating systems, int was 16-bit and long was 32-bit.

What is the size of int and how is it defined?

The size of a signed int or unsigned int item is the standard size of an integer on a particular machine. For example, in 16-bit operating systems, the int type is usually 16 bits, or 2 bytes. In 32-bit operating systems, the int type is usually 32 bits, or 4 bytes.

Why is sizeof(int) 4?

On a 32-bit machine, sizeof(int*) returns 4 because memory addresses on a 32-bit machine are 4 bytes wide. Similarly, on a 64-bit machine it returns 8, because memory addresses there are 8 bytes wide.

What is the size of int in bytes in C?

The size of int is usually 4 bytes (32 bits), so it can represent 2^32 distinct values, from -2147483648 to 2147483647.
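
Since these sizes are implementation-defined, the portable way to find out is to ask the compiler. A minimal sketch (the output varies by platform):

    #include <iostream>
    #include <limits>

    int main() {
        std::cout << "sizeof(short) = " << sizeof(short) << '\n'
                  << "sizeof(int)   = " << sizeof(int)   << '\n'
                  << "sizeof(long)  = " << sizeof(long)  << '\n'
                  << "sizeof(int*)  = " << sizeof(int*)  << '\n';
        std::cout << "int range: " << std::numeric_limits<int>::min()
                  << " to " << std::numeric_limits<int>::max() << '\n';
    }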


1 Answer

Generally speaking, each computer architecture is designed so that certain type sizes provide the most efficient numerical operations. The specific size depends on the architecture, and the compiler will select an appropriate size for int. A more detailed explanation of why hardware designers chose particular sizes for particular hardware would be out of scope for Stack Overflow.

A short must be promoted to int before integral operations are performed on it because that's the way it was in C, and C++ inherited that behavior with little reason to change it, since changing it could break existing code. I'm not sure why the rule was originally added to C, but one could speculate that it's related to "default int", where the compiler assumed int whenever no type was specified.
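
A small sketch of that inherited rule in action (assuming, as on all mainstream platforms, that int is wider than short):

    #include <type_traits>

    int main() {
        short s = 5;
        unsigned short us = 5;
        // Even unary + applies the integral promotion: the result is int, not short.
        static_assert(std::is_same<decltype(+s), int>::value,
                      "short promotes to int");
        // unsigned short also promotes to *signed* int, because int can
        // represent all of its values when int is wider than short.
        static_assert(std::is_same<decltype(+us), int>::value,
                      "unsigned short promotes to int");
        (void)s; (void)us; // silence unused-variable warnings
    }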

Bonus A: From 5/9 (expressions) we learn:

Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

And then of interest specifically:

  • floating point rules that don't matter here
  • Otherwise, the integral promotions (4.5) shall be performed on both operands
  • Then, if either operand is unsigned long the other shall be converted to unsigned long.
  • Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.
  • Otherwise, if either operand is long, the other shall be converted to long.
  • Otherwise, if either operand is unsigned, the other shall be converted to unsigned.
  • [Note: otherwise, the only remaining case is that both operands are int]

In summary, the compiler tries to find the "best" common type for binary operations, with int being the smallest type used.
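
A classic consequence of these rules, as a small sketch: when a signed int meets an unsigned int, the signed operand is converted to unsigned.

    #include <iostream>

    int main() {
        int a = -1;
        unsigned int b = 1;
        // int vs unsigned int: by the rules above, a is converted to unsigned,
        // where -1 wraps around to a huge value, so the comparison is false.
        std::cout << std::boolalpha << (a < b) << '\n'; // prints: false
    }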

answered Oct 08 '22 by Mark B