 

Is there any performance difference in using int versus int8_t

My main question is: is there any difference between int and int8_t in terms of execution time?

In a framework I am working on, I often read code where some parameters are declared as int8_t in function signatures because "that particular parameter can never be outside the -128..127 range".

In many places, int8_t is used for communication protocols, or to split a packet into fields inside a __attribute__((packed)) struct.

But in other places, it was mainly put there because someone thought it would be better to use a type that matches the size of the data more closely, probably trying to think ahead of the compiler.

Given that the code is meant to run on Linux, compiled with gcc against glibc, and that neither memory nor portability is an issue, I am wondering whether this is actually a good idea, performance-wise.

My first impression comes from the rule "Trying to be smarter than the compiler is always a bad idea" (unless you know where and how you need to optimize).

However, I do not know whether using int8_t actually costs performance (extra testing and computation to fit the int8_t size, extra operations to keep the variable within its bounds, etc.), or whether it improves performance in some way.

I am not good at reading even simple assembly, so I did not compile test code to assembly to try to figure out which one is better.

I tried to find a related question, but every discussion I found on int<size>_t versus int is about portability rather than performance.

Thanks for your input. Explained assembly samples or sources on this issue would be greatly appreciated.

asked Feb 04 '15 by DainDwarf


People also ask

Is short faster than int?

The integer promotions mean that no operation (addition, bitwise, logical etc etc) in the language can occur on a smaller integer type than int. Thus, operations on char/short/int are generally equally fast, as the former ones are promoted to the latter.
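
As a small illustration (a sketch of my own, not part of the original page), the following C program shows that two signed char operands are promoted to int before the addition, so the arithmetic itself happens at int width and only the final store narrows the result:

#include <stdio.h>

int main(void)
{
    signed char a = 100, b = 27;

    /* Both operands are promoted to int, so the sum is computed at int width. */
    int wide = a + b;

    /* Storing the result into a signed char narrows it back to 8 bits. */
    signed char narrow = (signed char)(a + b);

    printf("%d %d\n", wide, (int)narrow);
    return 0;
}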

What's the difference between int and int16_t?

If int is 16 bits, then obviously there is no difference between int and int16_t. Since int cannot be less than 16 bits, on most modern platforms it is actually wider than 16 bits, which makes it more useful than int16_t in that respect: it can hold more values. int16_t, on the other hand, is exactly 16 bits wherever it exists.

What is difference between int and int32_t?

In C and C++, int has at least 16 bits. Usually, on common 32-bit and 64-bit architectures, it has 32 bits. The language standards permit it to have any size greater or equal to 16 bits. On the other hand, int32_t has exactly 32 bits.
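
To make those width guarantees concrete, here is a minimal sketch of my own (assuming a hosted C environment) that prints the actual bit widths on the current platform; int16_t and int32_t are exactly 16 and 32 bits wherever they exist, while int is only required to have at least 16:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits per byte (8 on essentially all modern systems). */
    printf("int:     %zu bits\n", sizeof(int) * CHAR_BIT);
    printf("int16_t: %zu bits\n", sizeof(int16_t) * CHAR_BIT);
    printf("int32_t: %zu bits\n", sizeof(int32_t) * CHAR_BIT);
    return 0;
}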

What does int8_t mean?

Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
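
As a quick check (again my own sketch, not from the page), the limit macros that come with int8_t in <stdint.h> confirm the exact 8-bit range:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* INT8_MIN and INT8_MAX are defined alongside int8_t in <stdint.h>. */
    printf("int8_t range: %d .. %d\n", INT8_MIN, INT8_MAX);
    printf("sizeof(int8_t): %zu byte\n", sizeof(int8_t));
    return 0;
}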



1 Answer

int is generally the same size as a register on the CPU. The C standard says that any smaller type must be converted to int before operators are applied to it.

These conversions (sign extension) can be costly.

int8_t a = 1, b = 2, c = 3;
...
a = b + c; // This will translate to: a = (int8_t)((int)b + (int)c);

If you need speed, int is a safe bet, or use int_fast8_t (even safer). If exact size is important, use int8_t (if available).
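
If you want to see the cost for yourself, here is a small experiment of my own (assuming gcc on x86-64; a sketch, not an authoritative benchmark): compile the two functions below with gcc -O2 -S and compare the assembly. The int8_t accumulator forces the compiler to preserve 8-bit semantics (sign-extending loads such as movsbl, plus truncation of the running sum), while the int accumulator works directly at register width; in trivial code the difference may be only an instruction or two, or nothing at all.

#include <stdint.h>

/* Sum with an int8_t accumulator: each addition is promoted to int,
   then truncated back to int8_t, so the result wraps modulo 256. */
int8_t sum_narrow(const int8_t *p, int n)
{
    int8_t s = 0;
    for (int i = 0; i < n; i++)
        s = (int8_t)(s + p[i]);
    return s;
}

/* Same loop with an int accumulator: elements are sign-extended once
   when loaded, and no narrowing happens until the caller asks for it. */
int sum_wide(const int8_t *p, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += p[i];
    return s;
}

Note that the two functions are not equivalent: the int8_t version wraps modulo 256, which is exactly the extra semantics the compiler has to preserve.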

answered Oct 07 '22 by user694733