Many sources, including Microsoft, list both the int and long types as being 4 bytes, with a signed range of -2,147,483,648 to 2,147,483,647. What is the point of having a long primitive type if it doesn't actually provide a larger range of values?
The fact that an int uses a fixed number of bytes (such as 4) is a compiler/CPU trade-off: the size is chosen so that common integer operations are fast and efficient on the target hardware.
Compiler designers tend to maximize the performance of int arithmetic, making it the natural word size of the underlying processor or OS, and setting up the other types around it. As for long int: since the int can be omitted, it is by definition the same type as long.
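As a quick illustration (a minimal sketch, not from the original answer): because long int and long name the same type, a pointer to one is a pointer to the other, and the compiler accepts the mix without any cast or warning:

#include <stdio.h>

int main(void)
{
    long int a = 42;     /* "long int" and "long" name one and the same type */
    long *p = &a;        /* identical types, so no cast or warning is needed */
    printf("%ld\n", *p); /* prints 42 */
    return 0;
}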
On most 32-bit and 64-bit machines today, sizeof(int) returns 4, but that is a platform convention, not a language rule. Likewise, most older textbooks say integer variables occupy 2 bytes, which was equally true on the 16-bit systems they were written for.
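A quick way to check what your own platform chose is to print the sizes directly. This is a minimal sketch; the sizes in the comments are common platform conventions (LP64 on 64-bit Linux/macOS, LLP64 on 64-bit Windows), not guarantees:

#include <stdio.h>

int main(void)
{
    /* Typical LP64 (64-bit Linux/macOS): 4, 8, 8 */
    /* Typical LLP64 (64-bit Windows):    4, 4, 8 */
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    return 0;
}

This is also why Microsoft documents both int and long as 4 bytes: on Windows, long long (or int64_t) is the type that actually provides the larger range.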
The only things guaranteed about integer types are:
1. sizeof(char) == 1
2. sizeof(char) <= sizeof(short)
3. sizeof(short) <= sizeof(int)
4. sizeof(int) <= sizeof(long)
5. sizeof(long) <= sizeof(long long)
6. sizeof(char) * CHAR_BIT >= 8
7. sizeof(short) * CHAR_BIT >= 16
8. sizeof(int) * CHAR_BIT >= 16
9. sizeof(long) * CHAR_BIT >= 32
10. sizeof(long long) * CHAR_BIT >= 64
The other things are implementation-defined. Thanks to (4), long and int can have the same size, but long must be at least 32 bits wide (thanks to (9)).
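Those guarantees can even be checked at compile time. A minimal sketch, assuming a C11 compiler for _Static_assert: each assertion encodes one rule from the list above, and the file compiles on any conforming implementation precisely because the standard promises these relations:

#include <limits.h>  /* CHAR_BIT */

_Static_assert(sizeof(char) == 1,                  "rule 1");
_Static_assert(sizeof(char) <= sizeof(short),      "rule 2");
_Static_assert(sizeof(short) <= sizeof(int),       "rule 3");
_Static_assert(sizeof(int) <= sizeof(long),        "rule 4");
_Static_assert(sizeof(long) <= sizeof(long long),  "rule 5");
_Static_assert(sizeof(char) * CHAR_BIT >= 8,       "rule 6");
_Static_assert(sizeof(short) * CHAR_BIT >= 16,     "rule 7");
_Static_assert(sizeof(int) * CHAR_BIT >= 16,       "rule 8");
_Static_assert(sizeof(long) * CHAR_BIT >= 32,      "rule 9");
_Static_assert(sizeof(long long) * CHAR_BIT >= 64, "rule 10");

int main(void) { return 0; }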