In C++11 we are provided with fixed-width integer types, such as std::int32_t and std::int64_t, which are optional and therefore not ideal for writing cross-platform code. However, we also got non-optional variants of the types: the "fast" variants, e.g. std::int_fast32_t and std::int_fast64_t, as well as the "smallest-size" variants, e.g. std::int_least32_t, both of which are guaranteed to be at least the specified number of bits in size.
The code I am working on is part of a C++11-based cross-platform library, which supports compilation on the most popular Unix/Windows/Mac compilers. A question that now came up is whether there is an advantage in replacing the existing integer types in the code with the C++11 fixed-width integer types.
A disadvantage of using types like std::int16_t and std::int32_t is the lack of a guarantee that they are available: they are only provided if the implementation directly supports a type of that exact width (according to http://en.cppreference.com/w/cpp/types/integer).
However, since int is at least 16 bits wide, and 16 bits are large enough for the integers used in the code, what about using std::int_fast16_t over int? Is there a benefit in replacing all int types with std::int_fast16_t and all unsigned int's with std::uint_fast16_t, or is this unnecessary?
Analogously, if all supported platforms and compilers are known to feature an int of at least 32 bits in size, does it make sense to replace int and unsigned int with std::int_fast32_t and std::uint_fast32_t respectively?
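To make the question concrete, a hypothetical before/after of the replacement being considered (the function names are invented for illustration):

```cpp
#include <cstdint>

// Before: plain int -- at least 16 bits, exact width implementation-defined.
int sum_before(int a, int b) { return a + b; }

// After: the "fast" variant -- guaranteed to exist, at least 16 bits wide,
// chosen by the implementation to be the fastest such type.
std::int_fast16_t sum_after(std::int_fast16_t a, std::int_fast16_t b) {
    return a + b;
}
```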
int can be 16, 32 or even 64 bits on current computers and compilers. In the future, it could be bigger (say, 128 bits). If your code is ok with that, go with it.
If your code is only tested and working with 32-bit ints, then consider using int32_t. Then the code will fail at compile time instead of at run time when built for a system that doesn't have 32-bit ints (which is extremely rare today).
int_fast32_t is for when you need at least 32 bits but you care a lot about performance. On hardware where a 32-bit integer is loaded as a 64-bit integer and then bit-shifted back down to 32 bits in a cumbersome process, int_fast32_t may be a 64-bit integer. The cost of this is that on obscure platforms, your code behaves very differently.
If you are not testing on such platforms, I would advise against it.
Having things break at build time is usually better than having them break at run time. If and when your code is actually run on some obscure processor needing these features, then fix it. The rule of "you probably won't need it" applies.
Be conservative, generate early errors on hardware you have not tested on, and when you need to port to said hardware, do the work and testing required to be reliable.
In short:
Use int_fast##_t if and only if you have tested your code (and will continue to test it) on platforms where the int size varies, and you have shown that the performance improvement is worth that future maintenance.
Using int##_t with common ## sizes means that your code will fail to compile on platforms that you have not tested it on. This is good; untested code is not reliable, and unreliable code is usually worse than useless.
Without using int32_t, and using int, your code will sometimes have ints that are 32 bits, sometimes ints that are 64 bits (and in theory more), and sometimes ints that are 16 bits. If you are willing to test and support every such case for every such int, go for it.
Note that arrays of int_fast##_t can have cache problems: they could be unreasonably big. As an example, int_fast16_t could be 64 bits. An array of a few thousand or million of them could be individually fast to work with, but the cache misses caused by their bulk could make them slower overall; and the risk that things get swapped out to slower storage grows. int_least##_t can be faster in those cases.
The same applies, doubly so, to network-transmitted and file-stored data, on top of the obvious issue that network/file data usually has to follow formats that are stable over compiler/hardware changes. This, however, is a different question.
However, when using fixed width integer types you must pay special attention to the fact that int, long, etc. still have the same width as before. Integer promotion still happens based on the size of int, which depends on the compiler you are using. An integral number in your code will be of type int, with the associated width. This can lead to unwanted behaviour if you compile your code using a different compiler. For more detailed info: https://stackoverflow.com/a/13424208/3144964
I have just realised that the OP is asking only about int_fast##_t, not int##_t, since the latter is optional. However, I will keep the answer, hoping it may help someone.
I would add something. Fixed-size integers are so important (or even a must) when building APIs for other languages. One example is when you want to P/Invoke functions in a native C++ DLL and pass data to them from .NET managed code. In .NET, int is guaranteed to be a fixed size (32 bits). So, if you used int in C++ and it ended up 64-bit rather than 32-bit, this could cause problems and break the layout of the marshalled structs.
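As a sketch (the struct and function are invented for illustration, not from the original post), an interop-safe export pins every field to a fixed width so the C++ layout stays in lockstep with the .NET side, which marshals System.Int32 as exactly 32 bits:

```cpp
#include <cstdint>

extern "C" {
    struct Point {
        std::int32_t x;  // matches C# 'int' (System.Int32) exactly
        std::int32_t y;
    };

    // With plain 'int' or 'long', the width could differ between
    // compilers/platforms and corrupt the marshalled data.
    std::int32_t manhattan_distance(Point p) {
        return (p.x < 0 ? -p.x : p.x) + (p.y < 0 ? -p.y : p.y);
    }
}
```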