
What are the minimum-width integer types useful for?

Tags: c, c99

From ISO/IEC 9899:

7.18.1.2 Minimum-width integer types

1 The typedef name int_leastN_t designates a signed integer type with a width of at least N, such that no signed integer type with lesser size has at least the specified width. Thus, int_least32_t denotes a signed integer type with a width of at least 32 bits.

Why should I ever use these types?

When I'm deciding what type to use for a variable, I ask myself: "What is the biggest value it could ever be carrying?"

Once I have an answer, I check for the lowest 2^n that is bigger than it, and take the matching exact-width integer type.

In this case I could also use a minimum-width integer type. But why? I already know the value will never be greater. So why take something that could sometimes cover even more than I need?

All other cases I can imagine are invalid anyway, e.g.:

"I have a type that will be at least size of..." - The implmentation can't know what will be the largest (for example) user input I will ever get, so adjusting the type at compile time won't help.

"I have a variable where I can't determine what size of values it will be holding on run time."

-So how the compiler can know at compile time? -> It can't find the fitting byte size, too.

So what is the usage of these types?

asked Jan 27 '15 by dhein




2 Answers

So why take something that could sometimes cover even more as I need?

Because the exact size you need might not always exist. For example, on a system where CHAR_BIT > 8, int8_t is not available, but int_least8_t is.

The idea is not that the compiler will guess how many bits you need. The idea is that the compiler will always have a type available that satisfies your size requirement, even if it cannot offer an exact-width type.

answered Sep 22 '22 by user694733


Because your compiler knows best what is good for you. For example, on some CPU architectures, computations involving 8- or 16-bit types might be much slower than computations done in 32 bits, due to the extra instructions needed to mask operands and results to their width.

The C implementation on a Cray Unicos, for example, has only an 8-bit char type; everything else (short, int, long, long long) is 64 bits. If you force a type to be int16_t or int32_t, performance can suffer drastically because the narrow stores require masking, ORing and ANDing. Using int_least32_t would allow the compiler to use the native 64-bit type.

answered Sep 18 '22 by Jens