I like uint8_t because I feel that the intent (binary data, not characters) is better expressed. However, there are many char-based I/O functions which I might want to call. Just wondering if there's a best practice that folks conform to out there? This is my first foray into byte-level I/O.
The problem with char is that it may be either a signed type (with a range of at least -127..+127) or an unsigned type (with a range of at least 0..255). unsigned char is better than plain char for binary byte-oriented data.
uint8_t, if it exists, will have the same range and representation as unsigned char, and will probably be just another name for the same type.
Note that C++ (like C) defines a byte as the smallest addressable storage unit on a given system. It's at least 8 bits, and will probably be exactly 8 bits on any system you're likely to use, but it's allowed to be wider. The macro CHAR_BIT, defined in <limits.h> or <climits>, tells you the number of bits in a byte.
If you want to absolutely guarantee that you're using 8-bit bytes, either query the value of CHAR_BIT or use uint8_t. If bytes are bigger than 8 bits, then uint8_t won't exist (because there will be no type that meets its requirements).