If you need a counting variable, surely there must be an upper and a lower limit that your integer must support. So why wouldn't you specify those limits by choosing an appropriate (u)int_fastxx_t data type?
The simplest reason is that people are more used to int than to the additional types introduced in C++11, and that int is the language's "default" integral type (so much as C++ has one); the standard specifies, in [basic.fundamental/2], that:

Plain ints have the natural size suggested by the architecture of the execution environment [46]; the other signed integer types are provided to meet special needs.

[46] that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.
Thus, whenever a generic integer is needed which isn't required to have a specific range or size, programmers tend to just use int. While other types can communicate intent more clearly (for example, int8_t indicates that the value should never exceed 127), int also communicates that these details aren't crucial to the task at hand, while simultaneously providing a little leeway for catching values that exceed your required range (if a system handles the out-of-range conversion with modulo arithmetic, for example, an int8_t would turn 313 into 57, making the invalid value harder to troubleshoot). Typically, in modern programming, int indicates either that the value can be represented within the system's word size (which int is supposed to represent), or that the value can be represented within 32 bits (which is nearly always the size of int on x86 and x64 platforms).
Sized types also have the issue that the (theoretically) best-known ones, the intX_t line, are only defined on platforms which support sizes of exactly X bits. While the int_leastX_t types are guaranteed to be defined on all platforms, and guaranteed to be at least X bits wide, a lot of people wouldn't want to type that much if they don't have to, since it adds up when you need to specify types often. [You can't use auto either, because it deduces integer literals as int. This can be mitigated with user-defined literal operators (as sketched below), but that still takes more time to type.] Thus, they'll typically use int if it's safe to do so.
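If you do want auto to deduce a sized type, a user-defined literal is one way to get there; the _i32 suffix below is purely illustrative, not a standard one:

    #include <cstdint>

    // Illustrative user-defined literal: lets auto deduce a sized type
    // instead of int, at the cost of defining (and typing) the suffix.
    constexpr std::int_least32_t operator""_i32(unsigned long long v) {
        return static_cast<std::int_least32_t>(v);
    }

    int main() {
        auto plain = 42;      // deduced as int
        auto sized = 42_i32;  // deduced as std::int_least32_t
        static_assert(sizeof(sized) >= 4, "expected at least 32 bits (assuming 8-bit bytes)");
        (void)plain;
        (void)sized;
    }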
Or, in short: int is intended to be the go-to type for normal operation, with the other types reserved for exceptional circumstances. Many programmers stick to this mindset out of habit, and only use sized types when they explicitly require specific ranges and/or sizes. This also communicates intent relatively well; int means "number", and intX_t means "number that always fits in X bits".
It doesn't help that int has unofficially evolved to mean "32-bit integer", due to both 32- and 64-bit platforms usually using 32-bit ints. It's very likely that many programmers now expect int to always be at least 32 bits, to the point where it can very easily bite them in the rear if they have to program for platforms that don't support 32-bit ints.
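One cheap way to guard against that assumption is simply to state it; a sketch:

    #include <climits>

    // If the surrounding code quietly assumes a 32-bit-or-wider int, saying so
    // explicitly turns a subtle porting bug into an immediate compile error.
    static_assert(sizeof(int) * CHAR_BIT >= 32,
                  "this code assumes int is at least 32 bits");

    int main() {}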
Conversely, the sized types are typically used when a specific range or size is explicitly required, such as when defining a struct that needs to have the same layout on systems with different data models. They can also prove useful when working with limited memory, by using the smallest type that can fully contain the required range.

A struct intended to have the same layout on 16- and 32-bit systems, for example, would use int16_t or int32_t instead of int, because int is 16 bits in most 16-bit data models and in the LP32 32-bit data model (used by the Win16 API and classic Apple Macintoshes), but 32 bits in the ILP32 32-bit data model (used by the Win32 API and *nix systems, which effectively makes ILP32 the de facto "standard" 32-bit model).
Similarly, a struct intended to have the same layout on 32- and 64-bit systems would use int/int32_t or long long/int64_t rather than long, because long has different sizes in different models: 64 bits in LP64 (used by 64-bit *nix), but 32 bits in LLP64 (used by the Win64 API) and in the 32-bit models.
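A quick way to see the difference on whatever platform you're compiling for (the output varies by data model, which is exactly the point):

    #include <climits>
    #include <cstdint>
    #include <iostream>

    int main() {
        // Under LP64 (64-bit *nix) long is 64 bits; under LLP64 (Win64) and
        // the 32-bit models it is 32 bits. int64_t removes the ambiguity.
        std::cout << "long:      " << sizeof(long) * CHAR_BIT         << " bits\n";
        std::cout << "long long: " << sizeof(long long) * CHAR_BIT    << " bits\n";
        std::cout << "int64_t:   " << sizeof(std::int64_t) * CHAR_BIT << " bits\n";
    }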
Note that there is also a third 64-bit model, ILP64, in which int is 64 bits; this model is very rarely used (to my knowledge, only on some early 64-bit Unix systems), but it would mandate a sized type over int if layout compatibility with ILP64 platforms is required.