Smart typedefs

I've always used typedef in embedded programming to avoid common mistakes:

int8_t - 8-bit signed integer
int16_t - 16-bit signed integer
int32_t - 32-bit signed integer
uint8_t - 8-bit unsigned integer
uint16_t - 16-bit unsigned integer
uint32_t - 32-bit unsigned integer

The recent Embedded Muse (issue 177, not on the website yet) introduced me to the idea that it's useful to have some performance-specific typedefs. The standard it discusses suggests typedefs that indicate you want the fastest type that has at least a given minimum size.

For instance, one might declare a variable as int_fast16_t, but it would actually be implemented as an int32_t on a 32-bit processor, or an int64_t on a 64-bit processor, as those would be the fastest types of at least 16 bits on those platforms. On an 8-bit processor it would be an int16_t to meet the minimum size requirement.
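To make that concrete, here is a minimal sketch (mine, not from the newsletter) that prints the widths an implementation actually picks; the sizes in the trailing comment are typical values, not guarantees:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int16_t      exact = 0;  /* always exactly 16 bits */
        int_fast16_t fast  = 0;  /* fastest type of at least 16 bits;
                                    actual width is platform-dependent */

        printf("sizeof(int16_t)      = %zu\n", sizeof exact);
        printf("sizeof(int_fast16_t) = %zu\n", sizeof fast);
        /* Typically prints 2 and 8 on x86-64 glibc, 2 and 4 on many
           32-bit targets, and 2 and 2 on a 16-bit micro. */
        return 0;
    }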

Having never seen this usage before, I wanted to know:

  • Have you seen this in any projects, embedded or otherwise?
  • Any possible reasons to avoid this sort of optimization in typedefs?
asked Mar 30 '09 by Adam Davis


3 Answers

For instance, one might declare a variable as int_fast16_t, but it would actually be implemented as an int32_t on a 32-bit processor, or an int64_t on a 64-bit processor, as those would be the fastest types of at least 16 bits on those platforms

That's what int is for, isn't it? Are you likely to encounter an 8-bit CPU any time soon, where that wouldn't suffice?

How many unique datatypes are you able to remember?

Does it provide so much additional benefit that it's worth effectively doubling the number of types to consider whenever I create a simple integer variable?

I'm having a hard time even imagining the possibility that it might be used consistently.

Someone is going to write a function which returns an int_fast16_t, and then someone else is going to come along and store that value in an int16_t.

That means that in the obscure case where the fast variants are actually beneficial, mixing the two may change the behavior of your code. It may even cause compiler warnings or errors.
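A short sketch of that failure mode; count_items is a hypothetical helper invented for illustration, and the example assumes a platform where int_fast16_t is wider than 16 bits:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: returns a count in the "fast" type.
       70000 fits only because int_fast16_t is assumed to be
       wider than 16 bits here. */
    static int_fast16_t count_items(void)
    {
        return 70000;
    }

    int main(void)
    {
        /* Someone else stores the result in the exact-width type.
           The narrowing conversion is silent in C and its result is
           implementation-defined; typical two's-complement targets
           print 4464 rather than 70000. */
        int16_t n = count_items();
        printf("%d\n", (int)n);
        return 0;
    }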

answered Sep 28 '22 by jalf


Check out stdint.h from C99.
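For reference, the three families it defines for each width look like this (the comments paraphrase the C99 guarantees; each type also has an unsigned counterpart such as uint_fast16_t):

    #include <stdint.h>

    int16_t       a;  /* exactly 16 bits, no padding; optional in C99,
                         though present on virtually every platform    */
    int_least16_t b;  /* smallest type with at least 16 bits; required */
    int_fast16_t  c;  /* fastest type with at least 16 bits; required  */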

answered Sep 28 '22 by rlbond


The main reason I would avoid this typedef is that it allows the type to lie to the user. Take int16_t vs. int_fast16_t: both names appear to encode the size of the value, but only the first actually guarantees it. Encoding the size in the name is not an uncommon practice in C/C++. I personally use the size-specific typedefs to avoid confusion for myself and other people reading my code. Much of our code has to run on both 32-bit and 64-bit platforms, and many people don't know the various sizing rules between the platforms. Types like int32_t eliminate the ambiguity.
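As an aside, one way to keep such size assumptions honest is a compile-time check. A minimal C99-era sketch using the negative-array-size trick (C11 code could use _Static_assert instead); the typedef names are illustrative placeholders:

    #include <stdint.h>

    /* Compiles only if the condition holds: a false condition gives
       the array a negative size, which is a compile error. */
    typedef char int32_is_4_bytes[(sizeof(int32_t) == 4) ? 1 : -1];

    /* The same trick exposes the surprise discussed above: this
       fails to compile wherever int_fast16_t is wider than 16 bits. */
    /* typedef char fast16_is_2_bytes[(sizeof(int_fast16_t) == 2) ? 1 : -1]; */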

If I had not read the 4th paragraph of your question and had instead just seen the type name, I would have assumed it was some scenario-specific way of having a fast 16-bit value. And I obviously would have been wrong :(. For me it violates the "don't surprise people" rule of programming.

Perhaps if it had another distinguishing word, letter, or acronym in the name it would be less likely to confuse users. Maybe int_fast16min_t?

answered Sep 28 '22 by JaredPar