I'm noticing that cout << hex is giving me strange results, and I cannot find anywhere that answers why. What I am doing is simply assigning some values to both a uint8_t and a uint16_t and then attempting to write them to stdout. When I run this:
uint8_t a = 0xab;
uint16_t b = 0x24de;
cout << hex << a << endl;
cout << hex << b << endl;
I get the result:
$./a.out
24de
$
with no value displayed for the uint8_t. What could be causing this? I didn't think there would be a cout implementation for one type but not the other.
In simple terms, uint8_t is just a convenience name: it means unsigned 8-bit integer.
std::uint8_t is an alias for unsigned char:

typedef unsigned char uint8_t;

So the overload of the inserter that takes a character (unsigned char) is chosen, and the character with code 0xab is written rather than the number. Since 0xab lies outside the 7-bit ASCII range (it is in so-called extended ASCII), what actually appears, if anything, depends on your terminal's character encoding.
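A small sketch makes this visible, assuming a typical implementation where std::uint8_t really is unsigned char (the standard only requires it to be an unsigned 8-bit type, not specifically unsigned char):

#include <cstdint>
#include <iostream>
#include <type_traits>

int main() {
    // Holds on common platforms; not strictly guaranteed by the standard.
    static_assert(std::is_same<std::uint8_t, unsigned char>::value,
                  "uint8_t is unsigned char here");

    std::uint8_t c = 0x41;   // 0x41 is 'A' in ASCII
    std::cout << c << '\n';  // prints the character 'A', not the number 41
}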
You have to cast it to an integer:
std::cout << std::hex << static_cast<int>(a) << std::endl;
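As a complete, runnable sketch built from the question's snippet (the cast is the only change):

#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t  a = 0xab;
    std::uint16_t b = 0x24de;

    // Casting to int selects the numeric overload of operator<<.
    std::cout << std::hex << static_cast<int>(a) << std::endl;  // prints "ab"
    std::cout << std::hex << b << std::endl;                    // prints "24de"
}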
The other answers are correct about the reason. The simplest fix is:
cout << hex << +a << endl;
Demonstration: http://ideone.com/ZHHAHX
It works because operands undergo integer promotion.
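For illustration, a minimal sketch of that fix: unary + promotes the uint8_t to int before it reaches operator<<, so the numeric overload is picked:

#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t a = 0xab;

    // +a has type int (integral promotion), so the value prints as a number.
    std::cout << std::hex << +a << std::endl;  // prints "ab"
}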