
int8_t vs char: which is the best one?

Tags:

c++

c

char

I know both are different types (signed char and char), however my company's coding guidelines specify using int8_t instead of char.

So, I want to know why I should use int8_t instead of the char type. Are there any best practices for using int8_t?

sokid asked Jul 19 '13 10:07




1 Answer

The use of int8_t is perfectly good in some circumstances, specifically when the type is used for calculations where a signed 8-bit value is required: calculations involving strictly sized data, e.g. values defined by external requirements to be exactly 8 bits in the result. (I used pixel colour levels in a comment above, but that really would be uint8_t, as negative pixel colours usually don't exist, except perhaps in a YUV-type colourspace.)
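To make that concrete, here is a minimal sketch of such a case; the sensor scenario and all names are invented for illustration, the point is only that an external format mandates exactly 8 signed bits, which is what int8_t expresses:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical example: a device reports temperature offsets as raw signed
 * 8-bit values, so the wire format requires exactly 8 bits; int8_t fits. */
int main(void)
{
    int8_t offsets[] = { -12, 5, 127, -128 };   /* raw 8-bit samples */
    int sum = 0;                                /* widen before doing arithmetic */

    for (size_t i = 0; i < sizeof offsets / sizeof offsets[0]; ++i)
        sum += offsets[i];

    printf("sum of offsets: %d\n", sum);
    return 0;
}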

The type int8_t should NOT be used as a replacement for char in strings. That can lead to compiler errors (or warnings, and we don't really want to have to deal with compiler warnings either). For example:

int8_t *x = "Hello, World!\n";

printf(x);

may well compile fine on compiler A, but give errors or warnings about mixing signed and unsigned char values on compiler B, or fail outright if int8_t isn't implemented in terms of a char type at all. That's just like expecting

int *ptr = "Foo";

to compile in a modern compiler...

In other words, int8_t SHOULD be used instead of char if you are using 8-bit data for calculation. It is incorrect to wholesale-replace all char with int8_t, as the two are far from guaranteed to be the same type.
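As a rough illustration of why they are not interchangeable, here is a small C11 sketch using _Generic, which dispatches on the static type of its operand. In C, char, signed char and unsigned char are three distinct types, and int8_t is typically a typedef for signed char, so it is a different type from plain char even on platforms where plain char happens to be signed:

#include <stdint.h>
#include <stdio.h>

#define TYPE_NAME(x) _Generic((x),        \
    char:          "char",                \
    signed char:   "signed char",         \
    unsigned char: "unsigned char",       \
    default:       "something else")

int main(void)
{
    char   c = 'A';
    int8_t i = 65;

    printf("c is of type %s\n", TYPE_NAME(c));  /* prints "char" */
    printf("i is of type %s\n", TYPE_NAME(i));  /* typically "signed char" */
    return 0;
}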

If there is a need to use char for strings/text/etc., and for some reason char is too vague (it can be signed or unsigned, etc.), then using typedef char mychar; or something like that should be preferred. (It's probably possible to find a better name than mychar!)
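A minimal sketch of that suggestion; the name mychar is just the answer's placeholder, and the alias is still plain char underneath, it merely documents that the data is text:

#include <stdio.h>

typedef char mychar;              /* project-wide alias meaning "this is text" */

static void say(const mychar *s)  /* still const char *, so printf is happy */
{
    printf("%s", s);
}

int main(void)
{
    mychar greeting[] = "Hello, World!\n";
    say(greeting);
    return 0;
}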

Edit: I should point out that whether you agree with this or not, I think it would be rather foolish to simply walk up to whoever is in charge of this "principle" at the company, point at a post on SO and say "I think you're wrong". Try to understand what the motivation is. There may be more to it than meets the eye.

Mats Petersson answered Oct 16 '22 12:10