To size your (u)ints or not? [closed]

I googled for this, and was surprised to find no guidelines, rules of thumb, styles, etc. When declaring a (signed or unsigned) integer in C, one can either use whatever the processor defines for int, or specify the width explicitly (e.g. uint16_t, int8_t, uint32_t, etc.).

When doing desktop/dedicated C programs, I've tended very much towards the "just use the defaults" unless it was really important for me to specify width (e.g. "this is a 32-bit ID").

Having done more microcontroller work lately (PIC18 and AVR), I've tended to size everything, just because you become so space conscious.

And now I'm working on some PIC32 code (no OS), where I find myself torn between the two extremes.

I'm curious what rubric (if any) people have formulated that helps them decide when to size their ints, and when to use the defaults? And why?

asked Jul 16 '13 by Travis Griggs

2 Answers

If something is important to you, try to make it as explicit as possible.
If you don't really care, let the compiler decide.

This is quite close to what you wrote yourself. If you must follow a specification that says something is 32 bits, use a sized type. If it's just a loop counter, use int.

answered Nov 11 '22 by ugoren

There actually is a guideline that mentions this. MISRA C has a rule that says you should always use sized types. But the rule is only advisory, not required or mandatory for compliance.

From the MISRA C guidelines:

6.3 (adv): 'typedefs' that indicate size and signedness should be used in place of the basic types.
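In MISRA-style code this rule is usually satisfied with a project header of size-and-signedness typedefs; since C99, these are just aliases for the stdint.h types. A sketch (the short alias names are project-specific conventions, not mandated by MISRA):

```c
#include <stdint.h>

/* Project-wide typedefs indicating size and signedness, per MISRA
 * rule 6.3. The alias names below are illustrative only. */
typedef int8_t   s8;
typedef uint8_t  u8;
typedef int16_t  s16;
typedef uint16_t u16;
typedef int32_t  s32;
typedef uint32_t u32;

/* Width and signedness are obvious at a glance, regardless of
 * what the platform's plain int happens to be. */
static u16 adc_reading;
static s32 accumulator;
```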

answered Nov 11 '22 by embedded.kyle