
How do I ask for an int of "at least" a certain size in C?

The situation:

I have an application written in C which is resource intensive, and designed to be portable. I want to allow the compiler to select the fastest int size for the architecture, provided it is at least 32 bits.

Is it possible to select a size of "at least" 32 bits, or will the compiler optimize these kinds of things for me automatically?

asked Jul 21 '14 by derekdreery

People also ask

How do you define the size of an integer?

The size of a signed or unsigned int is the natural integer size on a particular platform. For example, on 16-bit platforms the int type is usually 16 bits (2 bytes), while on 32-bit platforms it is usually 32 bits (4 bytes).

What is a Size_t in C?

size_t is a basic unsigned integer type in C and C++. It is the type of the result returned by the sizeof operator, and its width is chosen so that it can hold the size of the largest theoretically possible array of any type. On a 32-bit system size_t is typically 32 bits; on a 64-bit system, 64 bits.
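As a quick sketch (assuming a hosted C99-or-later compiler), the following prints the widths of int and size_t on whatever machine it runs on; the actual numbers depend on the platform and compiler, and the values in the comment are only typical, not guaranteed:

    #include <stdio.h>
    #include <stddef.h>   /* size_t */

    int main(void)
    {
        /* sizeof yields a value of type size_t; %zu is its printf conversion. */
        printf("sizeof(int)    = %zu bytes\n", sizeof(int));
        printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));

        /* Typical (not guaranteed) output on a 64-bit desktop system:
           sizeof(int)    = 4 bytes
           sizeof(size_t) = 8 bytes */
        return 0;
    }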


2 Answers

The standard header stdint.h provides the types int_leastN_t and uint_leastN_t, where N is 8, 16, 32, and 64 (and possibly others, but these are not required). These are standard as of C99.

It also provides "fast" alternatives, aka int_fastN_t and uint_fastN_t, with the same values of N.

So, in your case, you can use int_least32_t or int_fast32_t.
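A minimal sketch (C99 or later) of how these types might be used; the sizes mentioned in the comments are typical for a 64-bit desktop target and are not guaranteed by the standard:

    #include <stdint.h>
    #include <inttypes.h>   /* PRIdLEAST32, PRIdFAST32 format macros */
    #include <stdio.h>

    int main(void)
    {
        /* Both are guaranteed to be at least 32 bits wide; the
           implementation picks the smallest (least) or fastest (fast)
           type that satisfies that. */
        int_least32_t smallest = 2000000000;
        int_fast32_t  fastest  = 2000000000;

        printf("least: %" PRIdLEAST32 " (%zu bytes)\n",
               smallest, sizeof smallest);
        printf("fast:  %" PRIdFAST32  " (%zu bytes)\n",
               fastest, sizeof fastest);
        return 0;
    }

On many 64-bit platforms int_fast32_t ends up 8 bytes wide while int_least32_t stays at 4, so the usual rule of thumb is to pick the fast variant when speed matters and the least variant when memory does.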

answered Sep 23 '22 by Drew McGowen


As others have noted, the standard include files define int_fast32_t, int_least32_t, uint_fast32_t, and uint_least32_t, which should likely behave as you want, but such types need to be used with extreme care. Because of integer promotion rules, there is no way for C code to avoid using the types int and unsigned int. Further, integer literals may not always be of the types one expects. A comparison between an int_fast32_t and the literals 0xABCD1234 or 12345u, for example, may be performed as either signed or unsigned, depending upon whether int is 16, 32, or 64 bits. Likewise, if n is 32 bits or larger, the meaning of n &= ~0x8000; would differ between a 16-bit machine and a larger one.
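To make that last example concrete, here is a small sketch of the ~0x8000 trap; the commented results assume a 32-bit int on one machine and a 16-bit int on the other, with int_fast32_t at least 32 bits wide in both cases:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int_fast32_t n = 0x0001FFFF;

        /* On a machine with 32-bit int:
             0x8000 is an int, ~0x8000 is 0xFFFF7FFF, and only bit 15
             is cleared: n becomes 0x17FFF.
           On a machine with 16-bit int:
             0x8000 does not fit in int, so it is an unsigned int;
             ~0x8000 is 0x7FFF, which widens to 0x00007FFF and also
             wipes out bits 16..31: n becomes 0x7FFF. */
        n &= ~0x8000;

        printf("n = 0x%" PRIXFAST32 "\n", (uint_fast32_t)n);
        return 0;
    }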

The C standard was never particularly designed to facilitate writing code which cares about integer sizes, but will nonetheless work compatibly on hardware with different sizes. Types like int_fast32_t make it easy to write code which seems like it should be portable, but may encourage complacency with respect to all of the nasty little traps hidden in the language.

answered Sep 20 '22 by supercat