 

Max value of INT in 64 bit computer

Tags:

c

My system is Linux x86_64. The max value of the int data type (INT_MAX) is shown to be 2147483647 (using the <limits.h> header) and not 32767 (as would be the case for a 16-bit int). Why?

asked Nov 27 '22 by RThakur

2 Answers

There is no type INT unless you define it yourself. The type is called int. C is case-sensitive.

The C standard says that an object of type int "has the natural size suggested by the architecture of the execution environment". For a 64-bit system, that does tend to imply that INT_MAX should be 2^63 - 1 -- but it's not a hard requirement.

The requirement is that int must be at least 16 bits wide, and that it must be at least as wide as short and no wider than long. (POSIX requires int to be at least 32 bits.)

It's useful to have integer types for all the sizes supported by the system. In particular, on most modern systems, it's useful to have predefined integer types of size 8, 16, 32, and 64 bits.

char is typically 8 bits. If we make int 64 bits, then either short is 16 bits and we have no 32-bit type, or short is 32 bits and we have no 16-bit type. (I've also worked on systems where there is no 16-bit or 32-bit integer type.)

A compiler could address this by defining its own extended integer types, but compilers typically don't do so.

In practice, making int 32 bits on 64-bit systems isn't a real problem. Operations on 32-bit integers are efficient, and if you want a 64-bit integer you can use long, long long, or int64_t, defined in <stdint.h>. (Actually 64-bit Windows defines long as 32 bits, but long long is always at least 64 bits.)

Bottom line: The C standard permits a fair amount of flexibility in how int is defined, and compiler implementers define it in the way they think will be most convenient to their users -- or, more commonly, in the way required by the platform's ABI. Compatibility with code written for 32-bit systems is often a major consideration.

answered Dec 16 '22 by Keith Thompson


The sizes of various integer types aren't guaranteed. The only thing you can count on is:

sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(short) >= sizeof(char) == 1

That said, in most real systems nowadays, you have:

sizeof(long long) == 8
sizeof(long)      == 8 or 4, depending on the architecture and compiler
sizeof(int)       == 4
sizeof(short)     == 2
sizeof(char)      == 1

int hasn't been 16-bit on mainstream platforms in a long time, and it is very rarely 64-bit. Usually long matches the architecture's word size, except with MSVC on 64-bit Windows, where long remains 32-bit.

You can also get each type's min/max values using INT_MIN/INT_MAX and the similar constants in <limits.h>.

answered Dec 17 '22 by mtijanic