
What is the difference between an int and a long in C++?

Tags:

c++

variables

It is implementation dependent.

For example, under Windows they are the same size, but on Alpha systems a long was 64 bits whereas an int was 32 bits. This article covers the rules for the Intel C++ compiler on various platforms. To summarize:

OS           Architecture     sizeof(long)
Windows      IA-32            4 bytes
Windows      Intel 64         4 bytes
Windows      IA-64            4 bytes
Linux        IA-32            4 bytes
Linux        Intel 64         8 bytes
Linux        IA-64            8 bytes
Mac OS X     IA-32            4 bytes
Mac OS X     Intel 64         8 bytes

The only guarantees you have are:

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

// FROM @KTC. The C++ standard also has:
sizeof(signed char)   == 1
sizeof(unsigned char) == 1

// NOTE: These sizes are not specified explicitly in the standard.
//       They are implied by the minimum/maximum values that MUST be supported
//       for the type. These limits are defined in <climits>.
sizeof(short)     * CHAR_BIT >= 16
sizeof(int)       * CHAR_BIT >= 16
sizeof(long)      * CHAR_BIT >= 32
sizeof(long long) * CHAR_BIT >= 64
CHAR_BIT >= 8   // number of bits in a byte
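
If you want those guarantees checked at compile time, here is a minimal sketch using static_assert (C++11 or later); the assertion messages are my own wording:

#include <climits>

static_assert(sizeof(char) == 1,                 "char is always exactly one byte");
static_assert(sizeof(char)  <= sizeof(short),    "short is at least as large as char");
static_assert(sizeof(short) <= sizeof(int),      "int is at least as large as short");
static_assert(sizeof(int)   <= sizeof(long),     "long is at least as large as int");
static_assert(sizeof(long)  <= sizeof(long long),"long long is at least as large as long");
static_assert(CHAR_BIT >= 8,                     "a byte has at least 8 bits");

These assertions should compile on any conforming implementation, since they merely restate the standard's own requirements.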

Also see: Is long guaranteed to be at least 32 bits?


When compiling for x64, the difference between int and long is somewhere between 0 and 4 bytes, depending on what compiler you use.

GCC uses the LP64 model, which means that int is 32 bits but long is 64 bits in 64-bit mode.

MSVC, on the other hand, uses the LLP64 model, which means that both int and long are 32 bits even in 64-bit mode.
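
A quick way to see which model your compiler uses is to print the sizes directly; a small sketch:

#include <iostream>

int main() {
    // On an LP64 compiler (e.g. GCC on 64-bit Linux) this prints 4 and 8;
    // on an LLP64 compiler (e.g. MSVC targeting 64-bit Windows) it prints 4 and 4.
    std::cout << "sizeof(int)  = " << sizeof(int)  << '\n';
    std::cout << "sizeof(long) = " << sizeof(long) << '\n';
}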


The C++ specification itself (an old version, but good enough for this) leaves this open:

There are four signed integer types: 'signed char', 'short int', 'int', and 'long int'. In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment*;

[Footnote: that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>. --- end footnote]


As Kevin Haines points out, ints have the natural size suggested by the execution environment, which has to fit within INT_MIN and INT_MAX.

The C89 standard states that UINT_MAX should be at least 2^16-1, USHRT_MAX at least 2^16-1, and ULONG_MAX at least 2^32-1. That makes a bit count of at least 16 for short and int, and at least 32 for long. For char it states explicitly that it must have at least 8 bits (CHAR_BIT). C++ inherits those rules via the limits.h file, so in C++ we have the same fundamental requirements for those values. You should not, however, conclude from that that int is at least 2 bytes. Theoretically, char, int and long could all be 1 byte, in which case CHAR_BIT would have to be at least 32. Just remember that a "byte" is always the size of a char, so if char is bigger, a byte is no longer only 8 bits.
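
To see the actual limits on a given platform, you can print the relevant macros from <climits>; a short illustrative sketch:

#include <climits>
#include <iostream>

int main() {
    // CHAR_BIT is the number of bits in a byte; the standard requires it to be >= 8.
    std::cout << "CHAR_BIT  = " << CHAR_BIT  << '\n';
    std::cout << "USHRT_MAX = " << USHRT_MAX << '\n';
    std::cout << "UINT_MAX  = " << UINT_MAX  << '\n';
    std::cout << "ULONG_MAX = " << ULONG_MAX << '\n';
}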


It depends on your compiler. You are guaranteed that a long will be at least as large as an int, but you are not guaranteed that it will be any longer.


For the most part, the number of bytes and the range of values are determined by the CPU's architecture, not by C++. However, C++ sets minimum requirements, which litb explained properly and which Martin York made a few small mistakes with.

The reason you can't use int and long interchangeably is that they aren't always the same length. C was invented on a PDP-11, where a byte had 8 bits and an int was two bytes that could be handled directly by hardware instructions. Since C programmers often needed four-byte arithmetic, long was invented; it was four bytes and handled by library functions. Other machines had different specifications. The C standard imposed some minimum requirements.