 

Integer size in C depends on what?

Tags:

c

What does the size of an integer depend on?

Is the size of an int variable in C dependent on the machine or the compiler?

shubhangi Motling asked Mar 10 '11

People also ask

What does size of int depend on?

Yes, it depends on both the processor (more specifically, the ISA, or instruction set architecture, e.g., x86 and x86-64) and the compiler, including its programming model. For example, on 16-bit machines sizeof(int) was 2 bytes; 32-bit machines have a 4-byte int.

How is int size calculated?

2^(n-1) - 1 is the formula for the maximum value of a signed integer type, where n is the size of the type in bits (the ^ here denotes exponentiation, not C's XOR operator). The range of a 32-bit int is -2,147,483,648 to 2,147,483,647.
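As a rough illustration of that formula (a minimal sketch, assuming an int with no padding bits and a C99-or-later compiler), the following derives the maximum from the bit width and compares it with INT_MAX from <limits.h>:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Width of int in bits on this implementation
           (assumes no padding bits). */
        int n = (int)(sizeof(int) * CHAR_BIT);

        /* 2^(n-1) - 1, computed without overflowing:
           build 2^(n-2) - 1 first, then double it and add 1. */
        long long max = ((1LL << (n - 2)) - 1) * 2 + 1;

        printf("int is %d bits wide\n", n);
        printf("computed maximum:        %lld\n", max);
        printf("INT_MAX from <limits.h>: %d\n", INT_MAX);
        return 0;
    }

On a typical platform with a 32-bit int, both values come out as 2147483647.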

Why size of int is 4 bytes in C?

The fact that an int uses a fixed number of bytes (such as 4) is a compiler/CPU design trade-off, made so that common integer operations are fast and efficient.

Does size of int vary?

The size of a signed int or unsigned int item is the standard size of an integer on a particular machine. For example, in 16-bit operating systems, the int type is usually 16 bits, or 2 bytes. In 32-bit operating systems, the int type is usually 32 bits, or 4 bytes.


2 Answers

It's implementation-dependent. The C standard only requires that:

  • char has at least 8 bits
  • short has at least 16 bits
  • int has at least 16 bits
  • long has at least 32 bits
  • long long has at least 64 bits (added in 1999)
  • sizeof(char) ≤ sizeof(short) ≤ sizeof(int) ≤ sizeof(long) ≤ sizeof(long long)
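A minimal sketch, assuming a C11 (or later) compiler, that checks these guarantees at compile time via <limits.h>; it builds on any conforming implementation, whatever the actual sizes turn out to be:

    #include <assert.h>   /* static_assert (C11) */
    #include <limits.h>

    /* Minimum ranges required by the C standard. */
    static_assert(CHAR_BIT >= 8,            "char: at least 8 bits");
    static_assert(SHRT_MAX >= 32767,        "short: at least 16 bits of range");
    static_assert(INT_MAX  >= 32767,        "int: at least 16 bits of range");
    static_assert(LONG_MAX >= 2147483647L,  "long: at least 32 bits of range");
    static_assert(LLONG_MAX >= 9223372036854775807LL,
                  "long long: at least 64 bits of range");

    /* Holds on real-world implementations (no padding-bit tricks). */
    static_assert(sizeof(char)  <= sizeof(short) &&
                  sizeof(short) <= sizeof(int)   &&
                  sizeof(int)   <= sizeof(long)  &&
                  sizeof(long)  <= sizeof(long long),
                  "sizes are non-decreasing");

    int main(void) { return 0; }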

In the 16/32-bit days, the de facto standard was:

  • int was the "native" integer size
  • the other types were the minimum size allowed

However, 64-bit systems generally did not make int 64 bits, which would have created the awkward situation of having three 64-bit types and no 32-bit type. Some compilers expanded long to 64 bits; others kept long at 32 bits and relied on long long for 64-bit values.
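To see which sizes a particular toolchain actually picks, a quick sketch like the one below prints them (the %zu format assumes a C99-or-later library):

    #include <stdio.h>

    int main(void)
    {
        /* Typical results: ILP32 (32-bit targets) -> 1 2 4 4 8,
           LP64 (64-bit Unix-like systems) -> 1 2 4 8 8,
           LLP64 (64-bit Windows) -> 1 2 4 4 8. */
        printf("char:      %zu\n", sizeof(char));
        printf("short:     %zu\n", sizeof(short));
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        return 0;
    }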

dan04 answered Oct 04 '22


Formally, representations of all fundamental data types (including their sizes) are compiler-dependent and only compiler-dependent. The compiler (or, more properly, the implementation) can serve as an abstraction layer between the program and the machine, completely hiding the machine from the program or distorting it in any way it pleases.

But in practice, compilers are designed to generate the most efficient code for a given machine and/or OS. To achieve that, the fundamental data types should have a natural representation on that machine and/or OS. In that sense, these representations are indirectly dependent on the machine and/or OS.

In other words, from the abstract, formal, and pedantic point of view, the compiler is free to completely ignore the data type representations specific to the machine. But that makes no practical sense: in practice, compilers make full use of the data type representations the machine provides.

Still, if some data type is not supported by the machine, the compiler can provide that type to programs anyway by implementing support for it at the compiler level ("emulating" it). For example, 64-bit integer types are normally available from 32-bit compilers targeting 32-bit machines, even though they are not directly supported by the hardware. Back in the day, compilers would often provide compiler-level support for floating-point types on machines that had no floating-point co-processor (and therefore did not support floating-point types directly).
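As an illustration of that kind of emulation (assuming the C99 <stdint.h> and <inttypes.h> headers are available), the sketch below compiles unchanged for both 32-bit and 64-bit targets; on a 32-bit machine the compiler synthesizes each 64-bit operation out of 32-bit instructions:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 64-bit arithmetic written once at the source level.
           A 64-bit target maps it to single instructions; a 32-bit
           target lowers it to pairs of 32-bit operations. */
        uint64_t a = 0x1234567890ABCDEFULL;
        uint64_t b = 0x0FEDCBA098765432ULL;

        printf("a + b = 0x%" PRIX64 "\n", a + b);
        printf("a * 3 = 0x%" PRIX64 "\n", a * 3);
        return 0;
    }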

AnT answered Oct 04 '22