
Why is int typically 32 bit on 64 bit compilers?

People also ask

Why does int size depend on compiler?

Data type sizes depend on the processor because the compiler tries to pick sizes the CPU can access efficiently. For example, on a 32-bit processor the compiler will normally not make int 2 bytes but 4 bytes, because a 4-byte int matches the machine word and can be read or written in a single access instead of wasting extra CPU cycles.

Are integers 32-bit or 64-bit?

On a typical 64-bit Unix system (the LP64 data model), int is 32 bits in size, while long, pointers, and off_t are all 64 bits (8 bytes) in size.
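
A minimal sketch that prints those sizes, assuming a Unix-like system (off_t comes from the POSIX header <sys/types.h>); on a typical LP64 platform it prints 4, 8, 8, 8:

    #include <stdio.h>
    #include <sys/types.h>   /* off_t (POSIX) */

    int main(void)
    {
        /* On an LP64 system this typically prints 4, 8, 8, 8. */
        printf("int   : %zu bytes\n", sizeof(int));
        printf("long  : %zu bytes\n", sizeof(long));
        printf("void *: %zu bytes\n", sizeof(void *));
        printf("off_t : %zu bytes\n", sizeof(off_t));
        return 0;
    }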

Why is an integer 32-bit?

A 32-bit signed integer is an integer whose value is represented in 32 bits (i.e. 4 bytes). Bits are binary, meaning they may only be a zero or a one. Thus, the 32-bit signed integer is a string of 32 zeros and ones. The signed part of the integer refers to its ability to represent both positive and negative values.

Is int always 32-bit?

No. The standard does not guarantee that int is 32 bits wide; it only requires it to cover at least the range of a 16-bit integer. Likewise, sizeof(T) counts bytes in units of char, and a byte is CHAR_BIT bits, which is at least 8 but not necessarily exactly 8 (if char were 32 bits, sizeof would measure in 32-bit units). Code that needs a specific width should use types like uint32_t, and sizes should use size_t, rather than assuming int is always suitable.
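
A short sketch of that distinction, assuming only a hosted C implementation with <stdint.h>: the <limits.h> values show what the basic types happen to be on the current platform (only minimums are guaranteed), while the fixed-width types pin down exact sizes:

    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    int main(void)
    {
        /* What the basic types happen to be here; the standard only guarantees minimums. */
        printf("CHAR_BIT = %d, sizeof(int) = %zu, INT_MAX = %d\n",
               CHAR_BIT, sizeof(int), INT_MAX);

        /* Exact widths, for code that genuinely needs them. */
        printf("int32_t: %zu bytes, int64_t: %zu bytes\n",
               sizeof(int32_t), sizeof(int64_t));
        return 0;
    }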


Bad choices on the part of the implementors?

Seriously, according to the standard, "Plain ints have the natural size suggested by the architecture of the execution environment", which does mean a 64 bit int on a 64 bit machine. One could easily argue that anything else is non-conformant. But in practice, the issues are more complex: switching from 32 bit int to 64 bit int would not allow most programs to handle large data sets or whatever (unlike the switch from 16 bits to 32); most programs are probably constrained by other considerations. And it would increase the size of the data sets, and thus reduce locality and slow the program down.

Finally (and probably most importantly), if int were 64 bits, short would have to be either 16 bits or 32 bits, and you'd have no way of specifying the other (except with the typedefs in <stdint.h>, and the intent is that these should only be used in very exceptional circumstances). I suspect that this was the major motivation.
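
A compile-time sketch of that gap, assuming the usual LP64 model (64-bit Linux/macOS) purely for illustration: with a 32-bit int, the basic types short/int/long already name 16-, 32- and 64-bit integers, whereas a 64-bit int would leave one of those widths with no basic-type name:

    #include <stdint.h>

    /* These hold on a typical LP64 platform (64-bit Linux/macOS); they
       illustrate the current layout, not a guarantee of the standard. */
    _Static_assert(sizeof(short) == 2, "16-bit short assumed");
    _Static_assert(sizeof(int)   == 4, "32-bit int assumed");
    _Static_assert(sizeof(long)  == 8, "64-bit long assumed");

    /* If int were 64 bits instead, one of the 16- or 32-bit widths would
       have no basic-type name and only int16_t / int32_t could express it. */
    int main(void) { return 0; }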


The history, trade-offs and decisions are explained by The Open Group at http://www.unix.org/whitepapers/64bit.html. It covers the various data models, their strengths and weaknesses and the changes made to the Unix specifications to accommodate 64-bit computing.


ints have been 32 bits on most major architectures for so long that changing them to 64 bits will probably cause more problems than it solves.


Because for a lot of software there is no advantage to having 64-bit integers.

Most calculations fit comfortably in a 32-bit integer (for many purposes, values up to 4 billion, or +/- 2 billion signed, are sufficient), so making int bigger would not help anything.

Using a bigger integer will, however, have a negative effect on how many integer-sized "things" fit in the processor's cache. So making them bigger will make calculations that involve large numbers of integers (e.g. arrays) take longer, because fewer of them fit in cache at once.
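
A small sketch of the footprint difference (the one-million element count is arbitrary): the same logical array doubles in size when the element type doubles, so roughly half as many elements fit in any given cache level:

    #include <stdio.h>
    #include <stdint.h>

    #define N 1000000  /* arbitrary element count, for illustration only */

    int main(void)
    {
        printf("%d x int32_t: %zu bytes\n", N, N * sizeof(int32_t)); /* ~4 MB */
        printf("%d x int64_t: %zu bytes\n", N, N * sizeof(int64_t)); /* ~8 MB */
        return 0;
    }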

That int is the natural size of the machine word isn't something strictly stipulated by the C++ standard. In the days when most machines were 16 or 32 bit, it made sense to make int either 16 or 32 bits, because that was a very efficient size for those machines. On 64-bit machines, going wider no longer "helps", so staying with a 32-bit int makes more sense.

Edit: Interestingly, when Microsoft moved to 64-bit, they didn't even make long 64-bit, because it would break too many things that relied on long being a 32-bit value. More importantly, their own API had a bunch of things that relied on long being 32 bits, where client software sometimes uses int and sometimes long, and they didn't want that to break.
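
A sketch of the kind of assumption involved (the example is hypothetical, not taken from the Windows API): code written against a 32-bit long keeps working under Microsoft's LLP64 model but silently changes meaning under the Unix LP64 model:

    #include <stdio.h>

    int main(void)
    {
        /* "All 32 bits set" -- the whole value only if long is 32 bits. */
        unsigned long mask = 0xFFFFFFFFUL;

        printf("sizeof(long) = %zu, mask = %#lx\n", sizeof(long), mask);
        /* 64-bit Windows (LLP64): sizeof(long) == 4, mask really is all bits set.
           64-bit Linux   (LP64) : sizeof(long) == 8, the top 32 bits are zero. */
        return 0;
    }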