Difference between int32_t and int_fast32_t [duplicate]

Tags: c, types, int32

What is the difference between the two? I know that int32_t is exactly 32 bits regardless of the environment, but since its name suggests that it is fast, how much faster can int_fast32_t really be compared to int32_t? And if it is significantly faster, why is that?

asked Apr 23 '13 by starcodex


People also ask

What does int32_t mean?

int32_t is an alias for whatever integer type your particular system has that is exactly 32 bits wide. These types follow the template intN_t or uintN_t, where N is the width of the integer in bits and can be 8, 16, 32, 64, or any other width the implementation supports.
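
To illustrate the naming pattern, here is a minimal sketch (values are made up; it assumes a C99 compiler, and the PRI format macros come from <inttypes.h>):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t   a = -100;                 /* exactly  8 bits, signed   */
        uint16_t b = 40000;                /* exactly 16 bits, unsigned */
        int32_t  c = -2000000000;          /* exactly 32 bits, signed   */
        uint64_t d = UINT64_C(9000000000); /* exactly 64 bits, unsigned */

        /* <inttypes.h> supplies matching printf format macros. */
        printf("%" PRId8 " %" PRIu16 " %" PRId32 " %" PRIu64 "\n", a, b, c, d);
        return 0;
    }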

What is the difference between int32_t and int?

In C and C++, int has at least 16 bits. Usually, on common 32-bit and 64-bit architectures, it has 32 bits. The language standards permit it to have any size greater than or equal to 16 bits. On the other hand, int32_t has exactly 32 bits.
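
A quick way to see the difference on your own machine is to print the sizes, for example with a sketch like this (assuming a C99 compiler for %zu):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int must be at least 16 bits; its real size is up to the platform.
           int32_t, where it exists, is exactly 32 bits (4 bytes) everywhere. */
        printf("sizeof(int)     = %zu bytes\n", sizeof(int));
        printf("sizeof(int32_t) = %zu bytes\n", sizeof(int32_t));
        return 0;
    }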

What's the difference between int and int16_t?

If int is 16 bits wide, then obviously there is no difference between int and int16_t. Since int cannot be less than 16 bits, on typical platforms it is wider than 16 bits, which makes it more useful than int16_t: after all, it can hold more values.
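
You can check the ranges directly; a small sketch along those lines:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int16_t always tops out at 32767; int's maximum is platform
           dependent but at least as large, and usually much larger. */
        printf("INT16_MAX = %d\n", INT16_MAX);
        printf("INT_MAX   = %d\n", INT_MAX);
        return 0;
    }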

What is int_fast32_t?

int_fast32_t is the "fastest" integer type for your current processor that is at least as wide as int32_t.


2 Answers

C is specified in terms of an idealized, abstract machine. But real-world hardware has behavioural characteristics that are not captured by the language standard. The _fast types are type aliases that allow each platform to specify types which are "convenient" for the hardware.

For example, if you had an array of 8-bit integers and wanted to mutate each one individually, this would be rather inefficient on contemporary desktop machines, because their load operations usually want to fill an entire processor register, which is either 32 or 64 bits wide (a "machine word"). So lots of loaded data ends up wasted, and more importantly, you cannot parallelize the loading and storing of two adjacent array elements, because they live in the same machine word and thus need to be load-modify-stored sequentially.

The _fast types are usually as wide as a machine word, if that's feasible. That is, they may be wider than you need and thus consume more memory (and thus are harder to cache!), but your hardware may be able to access them faster. It all depends on the usage pattern, though. (E.g. an array of int_fast8_t would probably be an array of machine words, and a tight loop modifying such an array may well benefit significantly.)
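
To see what your platform actually chose for the fast aliases, a small sketch like this prints their widths (the exact results are entirely up to the platform's <stdint.h>):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* On many 64-bit Linux systems int_fast16_t and int_fast32_t come
           out as 8 bytes, but nothing in the standard guarantees that. */
        printf("int_fast8_t : %zu bytes (int8_t : %zu)\n",
               sizeof(int_fast8_t),  sizeof(int8_t));
        printf("int_fast16_t: %zu bytes (int16_t: %zu)\n",
               sizeof(int_fast16_t), sizeof(int16_t));
        printf("int_fast32_t: %zu bytes (int32_t: %zu)\n",
               sizeof(int_fast32_t), sizeof(int32_t));
        printf("int_fast64_t: %zu bytes (int64_t: %zu)\n",
               sizeof(int_fast64_t), sizeof(int64_t));
        return 0;
    }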

The only way to find out whether it makes any difference is to compare!
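
In that spirit, here is a rough micro-benchmark sketch, not a definitive measurement: the array size is arbitrary, timing uses clock() from <time.h>, and real comparisons need care about optimization levels and what the compiler folds away.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    enum { N = 10000000 };  /* 10 million elements; adjust to taste */

    /* Sum an array; one version uses the exact-width type, the other the
       "fast" alias.  Whether there is any measurable difference is
       platform dependent -- which is exactly why you measure. */
    static int64_t sum_fixed(const int32_t *a, size_t n)
    {
        int64_t s = 0;
        for (size_t i = 0; i < n; ++i)
            s += a[i];
        return s;
    }

    static int64_t sum_fast(const int_fast32_t *a, size_t n)
    {
        int64_t s = 0;
        for (size_t i = 0; i < n; ++i)
            s += a[i];
        return s;
    }

    int main(void)
    {
        int32_t      *fixed = calloc(N, sizeof *fixed);
        int_fast32_t *fast  = calloc(N, sizeof *fast);
        if (!fixed || !fast)
            return 1;

        for (size_t i = 0; i < N; ++i) {   /* fill with some data */
            fixed[i] = (int32_t)i;
            fast[i]  = (int_fast32_t)i;
        }

        clock_t t0 = clock();
        int64_t s1 = sum_fixed(fixed, N);
        clock_t t1 = clock();
        int64_t s2 = sum_fast(fast, N);
        clock_t t2 = clock();

        printf("int32_t      sum=%lld  %.3f s\n", (long long)s1,
               (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("int_fast32_t sum=%lld  %.3f s\n", (long long)s2,
               (double)(t2 - t1) / CLOCKS_PER_SEC);

        free(fixed);
        free(fast);
        return 0;
    }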

answered Sep 30 '22 by Kerrek SB


int32_t is an integer which is exactly 32 bits wide. It is useful if you want, for example, to create a struct with an exact memory layout.
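
For instance, a hypothetical on-the-wire header (names made up) where exact-width fields keep every member the same size on every platform, alignment padding aside:

    #include <stdint.h>
    #include <stdio.h>

    struct packet_header {
        uint32_t magic;        /* 4 bytes */
        uint16_t version;      /* 2 bytes */
        uint16_t flags;        /* 2 bytes */
        int32_t  payload_len;  /* 4 bytes */
    };

    int main(void)
    {
        /* 12 on typical ABIs, since these fields need no internal padding.
           With plain int or int_fast32_t fields, the layout could change
           from one platform to another. */
        printf("sizeof(struct packet_header) = %zu\n",
               sizeof(struct packet_header));
        return 0;
    }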

int_fast32_t is the "fastest" integer type for your current processor that is at least as wide as int32_t. I don't know if there is really a gain on current processors (x86 or ARM).

But I can at least outline a real case: I used to work with a 32-bit PowerPC processor. Accessing misaligned 16-bit int16_t values was inefficient because the processor first had to realign them in one of its 32-bit registers. For non-memory-mapped data, since we had no memory restrictions, it was more efficient to use int_fast16_t (which was in fact a 32-bit int).
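
This is not the original code, but a sketch of that pattern (names are made up): keep int16_t where the in-memory layout matters, and do the arithmetic in int_fast16_t, which such a toolchain can map to a register-friendly 32-bit type.

    #include <stdint.h>
    #include <stdio.h>

    /* Layout-sensitive data (e.g. read from a file or a device) keeps the
       exact-width type. */
    struct sample {
        int16_t left;
        int16_t right;
    };

    /* Intermediate computation uses the fast alias; on the PowerPC
       toolchain described above it was a 32-bit type, avoiding the
       realignment work. */
    static int_fast16_t average(const struct sample *s)
    {
        int_fast16_t sum = (int_fast16_t)s->left + (int_fast16_t)s->right;
        return sum / 2;
    }

    int main(void)
    {
        struct sample s = { 100, 300 };
        printf("average = %d\n", (int)average(&s));
        return 0;
    }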

answered Sep 30 '22 by Offirmo