In this answer and the attached comments, Pavel Minaev makes the following argument that, in C, the only types to which uint8_t can be typedef'd are char and unsigned char. I'm looking at this draft of the C standard.

- uint8_t implies the presence of a corresponding type int8_t (7.18.1p1).
- int8_t is 8 bits wide and has no padding bits (7.18.1.1p1).
- uint8_t is also 8 bits wide.
- unsigned char is CHAR_BIT bits wide (5.2.4.2.1p2 and 6.2.6.1p3).
- CHAR_BIT is at least 8 (5.2.4.2.1p1).
- CHAR_BIT is at most 8, because either uint8_t is unsigned char, or it's a non-unsigned-char, non-bit-field type whose width is a multiple of CHAR_BIT (6.2.6.1p4).

Based on this argument, I agree that, if uint8_t exists, then both it and unsigned char have identical representations: 8 value bits and 0 padding bits. That doesn't seem to force them to be the same type (e.g., 6.2.5p14).
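To make the consequences of that argument concrete, here is a small compile-time check (a sketch using C11 _Static_assert, assuming nothing beyond the implementation actually providing uint8_t) that must pass on any conforming implementation:

    #include <limits.h>
    #include <stdint.h>

    /* If the implementation provides uint8_t at all, the argument above
     * forces these facts: it is exactly 8 bits wide with no padding, and
     * therefore CHAR_BIT (the width of unsigned char) must be exactly 8. */
    #ifdef UINT8_MAX
    _Static_assert(UINT8_MAX == 255, "uint8_t has exactly 8 value bits");
    _Static_assert(CHAR_BIT == 8, "uint8_t's existence forces CHAR_BIT == 8");
    _Static_assert(sizeof(uint8_t) == 1, "an 8-bit type is one byte when CHAR_BIT == 8");
    #endif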
Is it allowed that uint8_t is typedef'd to an extended unsigned integer type (6.2.5p6) with the same representation as unsigned char? Certainly it must be typedef'd (7.18.1.1p2), and it cannot be any standard unsigned integer type other than unsigned char (or char, if char happens to be unsigned). This hypothetical extended type would not be a character type (6.2.5p15) and thus would not qualify for aliased access to an object of an incompatible type (6.5p7), which strikes me as the reason a compiler writer would want to do such a thing.
uint8_t is an integer type, not a character type.
- If the intended use of the variable is to hold an unsigned numerical value, use uint8_t.
- If the intended use of the variable is to hold a signed numerical value, use int8_t.
- If the intended use of the variable is to hold a printable character, use char.
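A short illustration of that guideline (the variable names are invented for the example):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t level = 200;    /* unsigned numeric value in [0, 255] */
        int8_t  delta = -5;     /* signed numeric value in [-128, 127] */
        char    grade = 'A';    /* printable character */

        printf("level=%u delta=%d grade=%c\n",
               (unsigned)level, (int)delta, grade);
        return 0;
    }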
For uint8_t to exist, it must be 8 bits wide, have no padding bits, and be backed by an integer type the implementation provides; that matches the minimal requirements for unsigned char.
The difference between Uint8 and uint8_t will depend on the implementation, but usually they will both be 8-bit unsigned integers. Also, uint8_t and uint16_t are defined by the C standard (and, since C++11, by the C++ standard) in the stdint.h header; Uint8 and Uint16 are non-standard as far as I know.
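For illustration only: names like Uint8 and Uint16 typically come from a library (SDL, for example) or a project-local header, and on most implementations they are simply aliases for the standard fixed-width types. A hypothetical project-local definition might look like this:

    /* Standard fixed-width types come from <stdint.h>. */
    #include <stdint.h>

    /* Hypothetical non-standard aliases, as a library header might define them. */
    typedef uint8_t  Uint8;
    typedef uint16_t Uint16;

    _Static_assert(sizeof(Uint8)  == sizeof(uint8_t),  "Uint8 matches uint8_t");
    _Static_assert(sizeof(Uint16) == sizeof(uint16_t), "Uint16 matches uint16_t");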
If uint8_t exists, the no-padding requirement implies that CHAR_BIT is 8. However, there's no fundamental reason I can find why uint8_t could not be defined with an extended integer type. Moreover, there is no guarantee that the representations are the same; for example, the bits could be interpreted in the opposite order.

While this seems silly and gratuitously unusual for uint8_t, it could make a lot of sense for int8_t. If a machine natively uses ones' complement or sign/magnitude, then signed char is not suitable for int8_t. However, it could use an extended signed integer type that emulates two's complement to provide int8_t.
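The range requirement makes that concrete. A minimal compile-time sketch (assuming only that <stdint.h> defines int8_t at all):

    #include <stdint.h>

    /* int8_t, when it exists, must be two's complement: exactly 8 bits and
     * no padding (7.18.1.1), so its range is -128..127.  A native signed
     * char on a sign/magnitude or ones' complement machine can only reach
     * -127..127, so such an implementation would have to supply an extended
     * signed integer type emulating two's complement to provide int8_t. */
    #ifdef INT8_MIN
    _Static_assert(INT8_MIN == -128 && INT8_MAX == 127,
                   "int8_t is an 8-bit two's complement type");
    #endif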