What's the difference between "int" and "int_fast16_t"?

As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.

Isn't that exactly what the C99 definition of int_fast16_t is too?

Maybe they put it in there just for consistency, since the other int_fastXX_t types are needed?

Update

To summarize the discussion below:

  • My question was wrong in many ways. The C standard does not specify a bit width for int. It specifies a range, [-32767, 32767], that int must be able to represent.
  • I realize at first most people would say, "but that range implies at least 16 bits!" But C doesn't require two's-complement storage of integers. If the standard had said "16-bit", a platform with 1 parity bit, 1 sign bit, and 14 magnitude bits could still claim to be "meeting the standard" without satisfying that range.
  • The standard does not say anything about int being the most efficient type. Aside from the size requirement above, int can be decided by the compiler developer based on whatever criteria they deem most important (speed, size, backward compatibility, etc.).
  • On the other hand, int_fast16_t is like providing a hint to the compiler that it should use the type that is optimal for performance, possibly at the expense of every other tradeoff.
  • Likewise, int_least16_t tells the compiler to use the smallest type that is >= 16 bits, even if it is slower. Good for conserving space in large arrays and the like. (The small program after this list shows how to inspect the resulting widths.)
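
A quick way to see how these choices play out is to print the storage widths. A minimal C sketch; the output is implementation-defined and will differ across platforms:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* sizeof * CHAR_BIT gives the storage width in bits,
           including any padding bits the type may have. */
        printf("int           : %zu bits\n", sizeof(int) * CHAR_BIT);
        printf("int_fast16_t  : %zu bits\n", sizeof(int_fast16_t) * CHAR_BIT);
        printf("int_least16_t : %zu bits\n", sizeof(int_least16_t) * CHAR_BIT);
        return 0;
    }

On one common setup (glibc on x86-64) this prints 32, 64, and 16; MSVC on x64 typically gives 32, 32, and 16. Neither result is required by the standard.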

Example: MSVC on x86-64 uses a 32-bit int, even on 64-bit systems. MS chose this because too many people assumed int would always be exactly 32 bits, so a lot of ABIs would break. However, it's possible that int_fast32_t would be a 64-bit number if 64-bit values were faster on x86-64. (Which I don't think is actually the case, but it demonstrates the point.)
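
The only portable guarantees here are quite weak; everything beyond them, including MSVC's choice, is an implementation decision. A compile-time sketch of just the guaranteed parts, assuming the exact-width types exist on the platform:

    #include <assert.h>
    #include <limits.h>
    #include <stdint.h>

    /* These hold on every conforming implementation that provides
       int32_t; the 32-bit int on MSVC x64 is an ABI choice on top. */
    static_assert(sizeof(int) * CHAR_BIT >= 16,
                  "int occupies at least 16 bits");
    static_assert(sizeof(int_fast32_t) >= sizeof(int32_t),
                  "the fast type is at least as wide as the exact type");

    int main(void) { return 0; }  /* the checks above are compile-time */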

asked Jun 19 '15 by something_clever

People also ask

What is int_fast16_t?

int_fast16_t is the most efficient type (in terms of speed) with at least the range of a 16-bit int. A given platform may have decided that int should be 32-bit for many reasons, not only speed, and the same system may find a different type fastest for 16-bit integers.

What is the difference between int and uint16_t?

int is usually (but need not be) a 32-bit signed integer, while uint16_t is guaranteed to be an unsigned integer of exactly 16 bits.
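
That guarantee is what makes uint16_t arithmetic portable: converting a result back to uint16_t always wraps modulo 2^16, while overflow of a signed int is undefined behavior. A small sketch:

    #include <inttypes.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t u = UINT16_MAX;     /* exactly 65535 on every platform */
        u = u + 1;                   /* the addition happens in int, but the
                                        conversion back to uint16_t wraps
                                        modulo 2^16 */
        printf("%" PRIu16 "\n", u);  /* prints 0 wherever uint16_t exists */

        printf("%d\n", INT_MAX);     /* implementation-defined: often
                                        2147483647, but only >= 32767 is
                                        guaranteed */
        return 0;
    }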

Are int and int32_t the same?

Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified width, int can be any size >= 16 bits. At different times, both 16-bit and 32-bit int have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).
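
That difference is exactly why exact-width types show up in file formats and wire protocols. A sketch with a made-up header struct (the names are hypothetical):

    #include <stdint.h>

    /* Hypothetical on-disk record header. Exact-width types give every
       field the same width on every platform that provides them, though
       padding between fields still needs care (e.g. explicit packing). */
    struct record_header {
        int32_t magic;    /* exactly 32 bits, two's complement */
        int32_t length;
        int8_t  version;  /* exactly 8 bits */
    };

    int main(void)
    {
        struct record_header h = { .magic = 0x52454331, .length = 0, .version = 1 };
        return h.version - 1;  /* 0; only here to keep the sketch self-contained */
    }

With plain int, the magic field could be 16, 32, or 64 bits depending on the implementation, silently changing the file format.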

Why are int and long int the same?

Compiler designers tend to maximize the performance of int arithmetic, making it the natural size for the underlying processor or OS, and setting up the other types accordingly. As for long int: since the int can be omitted, it is by definition the same type as long.
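
C11's _Generic makes the "same type" claim concrete, since it selects on the type itself rather than on its size (a small sketch):

    #include <stdio.h>

    int main(void)
    {
        /* "long int" and "long" are two spellings of one type, so the
           long branch is selected; int is a distinct type even on
           platforms where sizeof(int) == sizeof(long). */
        printf("%s\n", _Generic((long int)0,
                                long: "long int is the same type as long",
                                int:  "unreachable"));
        return 0;
    }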


1 Answer

int is a "most efficient type" in speed/size - but that is not specified by per the C spec. It must be 16 or more bits.

int_fast16_t is the most efficient type in speed with at least the range of a 16-bit int.

Example: A given platform may have decided that int should be 32-bit for many reasons, not only speed. The same system may find a different type is fastest for 16-bit integers.

Example: On a 64-bit machine, where one would expect int to be 64-bit, a compiler may instead use a mode with 32-bit int compilation for compatibility. In this mode, int_fast16_t could be 64-bit, as that is natively the fastest width: it avoids alignment issues, etc.
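
For a feel of what that looks like in practice, here is a sketch of the typedefs such an implementation might ship; these are illustrative only, not copied from any real <stdint.h>:

    /* What an implementation targeting a 64-bit machine might do in
       its own <stdint.h> (illustrative, not any real header): */
    typedef int   int32_t;        /* the compatibility-driven 32-bit int */
    typedef short int_least16_t;  /* smallest type with at least 16 bits */
    typedef long  int_fast16_t;   /* 64-bit here, if that is the width
                                     the implementation deems fastest */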

answered Oct 16 '22 by chux - Reinstate Monica