From ISO/IEC 9899:
7.18.1.2 Minimum-width integer types
1 The typedef name int_leastN_t designates a signed integer type with a width of at least N, such that no signed integer type with lesser size has at least the specified width. Thus, int_least32_t denotes a signed integer type with a width of at least 32 bits.
Why should I ever use these types?
When I'm deciding what type to use for a variable, I ask myself: "What is the biggest value it could ever carry?"
Then I find the answer, check which is the lowest 2^n greater than that, and take the matching exact-width integer type.
In this case I could also use a minimum-width integer type. But why? I already know the value will never be greater. So why take something that could sometimes cover even more than I need?
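To make that concrete, here is a made-up example (the name counter and the limit 100000 are purely illustrative):

    #include <stdint.h>

    /* Say I know this value will never exceed 100000.  The lowest 2^n
       above that is 2^17, and the smallest exact-width type that covers
       it is int32_t, so that is what I would pick. */
    int32_t counter = 100000;

    /* The minimum-width alternative only promises "at least 32 bits". */
    int_least32_t counter_least = 100000;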
All other cases I can imagine were even invalid, e.g.:
"I have a type that will be at least the size of..." - The implementation can't know what the largest (for example) user input I will ever get will be, so adjusting the type at compile time won't help.
"I have a variable where I can't determine what size of values it will hold at run time."
- So how can the compiler know at compile time? -> It can't find the fitting byte size either.
So what is the use of these types?
<stdint.h> is a header file in the C standard library, introduced in C99 (section 7.18), that allows programmers to write more portable code by providing a set of typedefs that specify exact-width integer types, together with macros defining the minimum and maximum allowable values for each type.
The fixed-width integer types that <inttypes.h> provides include signed integer types such as int8_t, int16_t, int32_t, and int64_t, and unsigned integer types such as uint8_t, uint16_t, uint32_t, and uint64_t.
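For illustration, here is a minimal sketch (assuming a hosted C99 implementation) of those typedefs together with their limit and printf-format macros:

    #include <inttypes.h>   /* includes <stdint.h>; also provides PRId32 etc. */
    #include <stdio.h>

    int main(void)
    {
        int32_t       exact = INT32_MAX;       /* exactly 32 bits wide          */
        int_least32_t least = INT_LEAST32_MAX; /* at least 32 bits, maybe wider */

        printf("int32_t max:       %" PRId32 "\n", exact);
        printf("int_least32_t max: %" PRIdLEAST32 "\n", least);
        return 0;
    }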
So why take something that could sometimes cover even more than I need?
Because there might not always be the size you need. For example, on a system where CHAR_BIT > 8, int8_t is not available, but int_least8_t is.
The idea is not that the compiler will guess how many bits you need. The idea is that the compiler will always have a type available that satisfies your size requirement, even if it cannot offer an exact-size type.
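A minimal sketch of what that buys you, assuming a hosted C99 implementation (the value 100 is arbitrary): the exact-width type is optional, the least-width one is mandatory.

    #include <stdint.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef INT8_MAX
        /* int8_t (and the INT8_MAX macro) exist only on implementations
           that actually have an exact 8-bit integer type. */
        int8_t exact = 100;
        printf("int8_t is available, value = %d\n", (int)exact);
    #else
        printf("no int8_t here (CHAR_BIT = %d)\n", CHAR_BIT);
    #endif

        /* int_least8_t must exist on every conforming implementation,
           even where CHAR_BIT > 8. */
        int_least8_t portable = 100;
        printf("int_least8_t value = %d\n", (int)portable);
        return 0;
    }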
Because your compiler knows best what is good for you. For example, on some CPU architectures, computations involving 8 or 16 bit types might be much slower than computations done in 32 bits due to extra instructions for masking operands and results to match their width.
The C implementation on a Cray Unicos, for example, has only an 8-bit char type; everything else (short, int, long, long long) is 64 bits. If you force a type to be int16_t or int32_t, performance can suffer drastically due to the narrow stores requiring masking, oring and anding. Using int_least32_t would allow the compiler to use the native 64-bit type.
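As a small check you can run on your own machine (the printed sizes depend entirely on the implementation; on the Cray described above int_least32_t could map to the 64-bit type, while on x86-64 both are typically 4 bytes):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int32_t must be exactly 32 bits, so narrow stores may need
           extra masking on hardware without native 32-bit access. */
        printf("sizeof(int32_t)       = %zu\n", sizeof(int32_t));

        /* int_least32_t only has to be at least 32 bits, so the compiler
           is free to pick a wider native type. */
        printf("sizeof(int_least32_t) = %zu\n", sizeof(int_least32_t));
        return 0;
    }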