
Is the size of C "int" 2 bytes or 4 bytes?

Tags: c, int, byte


I know it's equal to sizeof(int). The size of an int is really compiler dependent. Back in the day, when processors were 16-bit, an int was 2 bytes. Nowadays, it's most often 4 bytes on 32-bit as well as 64-bit systems.

Still, using sizeof(int) is the best way to get the size of an integer for the specific system the program is executed on.

EDIT: Fixed wrong statement that int is 8 bytes on most 64-bit systems. For example, it is 4 bytes on 64-bit GCC.
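
For example, a minimal program along these lines prints the actual sizes on whatever system it is compiled for (the output varies by platform and compiler):

    /* Prints the storage sizes of the standard integer types for the
       system this program is compiled on. sizeof yields a size_t, so
       it is printed with %zu. */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(short)     = %zu\n", sizeof(short));
        printf("sizeof(int)       = %zu\n", sizeof(int));
        printf("sizeof(long)      = %zu\n", sizeof(long));
        printf("sizeof(long long) = %zu\n", sizeof(long long));
        return 0;
    }

On a typical 64-bit Linux system with GCC this prints 2, 4, 8 and 8, but none of that is guaranteed by the standard beyond the minimum ranges.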


This is one of the points in C that can be confusing at first: the C standard only specifies a minimum range for each integer type that is guaranteed to be supported. int is guaranteed to be able to hold -32767 to 32767, which requires at least 16 bits. An implementation that provides only that minimum can make int 2 bytes. However, implementations are free to go beyond that minimum, and many modern compilers make int 32-bit (which means 4 bytes pretty much ubiquitously).

The reason your book says 2 bytes is most probably that it's old. At one time, that was the norm. In general, you should always use the sizeof operator if you need to find out how many bytes a type occupies on the platform you're using.

To address this, C99 added new types that let you explicitly ask for an integer of a certain size, for example int16_t or int32_t. Prior to that, there was no universal way to get an integer of a specific width (although most platforms provided similar types on a per-platform basis).
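
A minimal sketch of how those fixed-width types are used; note that the exact-width types int16_t and int32_t are technically optional in the standard, while the least/fast variants are always available:

    /* Fixed-width integer types from <stdint.h> (C99), printed with the
       matching format macros from <inttypes.h>. */
    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int16_t       a = 32767;       /* exactly 16 bits (optional, but nearly universal) */
        int32_t       b = 2147483647;  /* exactly 32 bits */
        int_least16_t c = 1;           /* at least 16 bits, always available */
        int_fast32_t  d = 2;           /* "fastest" type with at least 32 bits */

        printf("a = %" PRId16 ", b = %" PRId32 "\n", a, b);
        printf("sizeof(int_least16_t) = %zu, sizeof(int_fast32_t) = %zu\n",
               sizeof c, sizeof d);
        return 0;
    }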


There's no specific answer. It depends on the platform; it is implementation-defined. It can be 2 bytes, 4 bytes, or something else.

The idea behind int was that it was supposed to match the natural "word" size on the given platform: 16 bit on 16-bit platforms, 32 bit on 32-bit platforms, 64 bit on 64-bit platforms, you get the idea. However, for backward compatibility purposes some compilers prefer to stick to 32-bit int even on 64-bit platforms.

The days of 2-byte int are long gone, though, unless you are using an embedded platform with a 16-bit word size. Your textbooks are probably very old.
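
If you need to know the width of int at compile time rather than at run time, one option is to test INT_MAX from <limits.h>, which may be used in preprocessor conditionals. The macro name INT_WIDTH_GUESS below is purely illustrative:

    /* Rough compile-time detection of the width of int via INT_MAX.
       This assumes no padding bits, which holds on all common platforms. */
    #include <limits.h>
    #include <stdio.h>

    #if INT_MAX == 32767
    #  define INT_WIDTH_GUESS 16
    #elif INT_MAX == 2147483647
    #  define INT_WIDTH_GUESS 32
    #else
    #  define INT_WIDTH_GUESS 64  /* or some other implementation-defined width */
    #endif

    int main(void)
    {
        printf("int appears to be %d bits wide\n", INT_WIDTH_GUESS);
        return 0;
    }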


The answer to this question depends on which platform you are using.
But irrespective of platform, you can reliably assume at least the following minimum ranges (verified in the sketch after the list):

 [8-bit]  signed char:        -127 to 127
 [8-bit]  unsigned char:      0 to 255
 [16-bit] signed short:       -32767 to 32767
 [16-bit] unsigned short:     0 to 65535
 [32-bit] signed long:        -2147483647 to 2147483647
 [32-bit] unsigned long:      0 to 4294967295
 [64-bit] signed long long:   -9223372036854775807 to 9223372036854775807
 [64-bit] unsigned long long: 0 to 18446744073709551615
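
A small sketch (assuming a C11 compiler for _Static_assert) that checks those guaranteed minimums against the actual limits of the platform it is compiled on; the assertions must hold on every conforming implementation:

    /* The guaranteed minimum magnitudes from the C standard, checked
       against the platform's actual limits in <limits.h>. */
    #include <limits.h>
    #include <stdio.h>

    _Static_assert(SCHAR_MAX >= 127,                    "signed char");
    _Static_assert(UCHAR_MAX >= 255,                    "unsigned char");
    _Static_assert(SHRT_MAX  >= 32767,                  "short");
    _Static_assert(USHRT_MAX >= 65535,                  "unsigned short");
    _Static_assert(LONG_MAX  >= 2147483647L,            "long");
    _Static_assert(ULONG_MAX >= 4294967295UL,           "unsigned long");
    _Static_assert(LLONG_MAX >= 9223372036854775807LL,  "long long");

    int main(void)
    {
        /* The actual ranges are often wider, e.g. long is 64-bit on LP64 systems. */
        printf("INT_MAX  = %d\n",  INT_MAX);
        printf("LONG_MAX = %ld\n", LONG_MAX);
        return 0;
    }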