 

Difference between int32, int, int32_t, int8 and int8_t

Tags: c, int, declaration

People also ask

What does int8_t mean?

int8_t denotes a signed integer type with a width of exactly 8 bits.

What does int32_t mean?

int32_t is an alias for whatever integer type your particular system has that is exactly 32 bits wide. The general template is intN_t (or uintN_t for unsigned), where N is the width in bits: 8, 16, 32, 64, or any other width the implementation supports.

How many bits is int8_t?

An int8_t variable uses 8 bits; its values range from -(2^7) to 2^7 - 1, that is, -128 to 127. A uint8_t is unsigned and also uses 8 bits, so its range is 0 to 2^8 - 1, that is, 0 to 255.
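A minimal sketch of those limits (assuming a hosted C99 compiler where stdint.h provides the exact-width types):

 #include <stdint.h>
 #include <stdio.h>

 int main(void)
 {
     /* These macros are defined by <stdint.h> whenever the
        corresponding exact-width types exist. */
     printf("int8_t  range: %d to %d\n", INT8_MIN, INT8_MAX);  /* -128 to 127 */
     printf("uint8_t range: 0 to %d\n", UINT8_MAX);            /* 0 to 255 */
     return 0;
 }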


Between int32 and int32_t (and likewise between int8 and int8_t) the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32 -- the latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of int8_t and int32_t in C99.

Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and although you might expect 64 bits on a 64-bit implementation, most such implementations keep int at 32 bits).
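A short sketch (hosted compiler assumed) that reports how wide plain int actually is on the implementation at hand:

 #include <limits.h>
 #include <stdio.h>

 int main(void)
 {
     /* The standard only requires INT_MAX >= 32767, i.e. int is at
        least 16 bits; the actual width is implementation-defined.
        (sizeof * CHAR_BIT counts any padding bits too, but in
        practice int has none.) */
     printf("int is %zu bits here\n", sizeof(int) * CHAR_BIT);
     printf("INT_MIN = %d, INT_MAX = %d\n", INT_MIN, INT_MAX);
     return 0;
 }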

On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.

Oops -- missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits), but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need it to be one or the other, you have to say so explicitly with signed char or unsigned char.
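A sketch of that signedness point; the result of the plain-char line is implementation-defined, which is exactly why you spell the signedness out (or use int8_t/uint8_t) when you care about numeric values:

 #include <stdio.h>
 #include <stdint.h>

 int main(void)
 {
     char          c  = (char)0xFF;          /* -1 or 255, depending on the compiler */
     signed char   sc = (signed char)0xFF;   /* -1 on two's-complement machines */
     unsigned char uc = 0xFF;                /* always 255 */
     int8_t        i8 = -1;                  /* exactly 8 bits, always signed */

     printf("plain char:    %d\n", (int)c);
     printf("signed char:   %d\n", (int)sc);
     printf("unsigned char: %d\n", (int)uc);
     printf("int8_t:        %d\n", (int)i8);
     return 0;
 }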


The _t types are typedefs in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists. int, on the other hand, is guaranteed to exist.
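Since stdint.h defines the corresponding limit macros only for the typedefs it actually provides, you can test for the optional exact-width types at compile time. A sketch (the my_i32 alias is just an illustrative name):

 #include <stdint.h>
 #include <limits.h>
 #include <stdio.h>

 /* INT32_MAX exists only when int32_t itself exists, so it doubles
    as a feature test for the optional exact-width type. */
 #ifdef INT32_MAX
 typedef int32_t my_i32;        /* exactly 32 bits */
 #else
 typedef int_least32_t my_i32;  /* always available: at least 32 bits */
 #endif

 int main(void)
 {
     my_i32 x = 100000;
     printf("my_i32 is %zu bits\n", sizeof x * CHAR_BIT);
     return 0;
 }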


Always keep in mind that the size is variable if you do not specify it explicitly, so if you declare

 int i = 10;

on some systems the compiler may give you a 16-bit integer, while on others it will be a 32-bit integer (or even a 64-bit integer on some newer systems).

In embedded environments this can lead to surprising results (especially when handling memory-mapped I/O, or even in a simple array situation), so it is highly recommended to use fixed-size types there; a memory-mapped I/O sketch follows the legacy typedefs below. In legacy systems you may come across

 typedef short INT16;
 typedef int   INT32;
 typedef long  INT64;   /* only correct where long is actually 64 bits */
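To make the memory-mapped I/O point above concrete, here is a hedged sketch (the register layout and names are invented for illustration): a hardware register that is 32 bits wide should be accessed through a type that is 32 bits everywhere, not through plain int.

 #include <stdint.h>
 #include <stdio.h>

 /* Hypothetical register block of some peripheral; in real firmware a
    pointer to it would come from a fixed, vendor-defined address. */
 typedef struct {
     volatile uint32_t status;   /* 32 bits in hardware, 32 bits here */
     volatile uint32_t control;
 } device_regs;

 static int device_ready(device_regs *dev)
 {
     /* The read is always 32 bits wide, no matter what sizeof(int) is. */
     return (dev->status & 0x1u) != 0;
 }

 int main(void)
 {
     device_regs fake = { 0x1u, 0u };  /* stand-in so the sketch runs */
     printf("ready: %d\n", device_ready(&fake));
     return 0;
 }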

Starting with C99, the standard added the stdint.h header, which provides similar typedefs with guaranteed widths.

On a Windows-based system, you may see entries like these in the stdint.h header:

 typedef signed char       int8_t;
 typedef signed short      int16_t;
 typedef signed int        int32_t;
 typedef unsigned char     uint8_t;

There is quite a bit more to it, such as the minimum-width (int_leastN_t) and fastest minimum-width (int_fastN_t) integer types alongside the exact-width ones, so it is not a bad idea to explore stdint.h for a better understanding.
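For example (a minimal sketch, assuming a hosted C99 environment), the least-width and fast-width families are always present even where the exact-width types are optional:

 #include <stdint.h>
 #include <limits.h>
 #include <stdio.h>

 int main(void)
 {
     int_least8_t small = 100;     /* smallest type with at least 8 bits */
     int_fast32_t quick = 100000;  /* "fastest" type with at least 32 bits */
     int32_t      exact = 100000;  /* exactly 32 bits (optional, but nearly universal) */

     printf("int_least8_t: %zu bits\n", sizeof small * CHAR_BIT);
     printf("int_fast32_t: %zu bits\n", sizeof quick * CHAR_BIT);
     printf("int32_t     : %zu bits\n", sizeof exact * CHAR_BIT);
     return 0;
 }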