
interpret unsigned as signed

I'm working on an embedded platform (ARM) and have to be careful when dealing with bit patterns. Let's pretend this line is beyond my influence:

uint8_t foo = 0xCE;          // 0b11001110

Interpreted as unsigned this would be 206. But it's actually signed, thus representing -50. How can I continue using this value as signed?

int8_t bar = foo;            // doesn't work

Neither do these (they result in 0x10 or 0x00 for all input values):

int8_t bar = static_cast<int8_t>(foo);
int8_t bar = reinterpret_cast<int8_t&>(foo);

I just want the bits to remain untouched, i.e. (bar == 0xCE).

Vice versa, I'd be interested in how to get bit patterns representing negative numbers into unsigned variables without disturbing the bit pattern. I'm using GCC.

asked Sep 10 '11 by Sven-de

2 Answers

The following works fine for me, though as the comments note, this conversion is implementation-defined:

int x = (signed char)(foo);

In C++, you can also say:

int x = static_cast<signed char>(foo);

Note that integer conversions try to preserve the value, not the bit pattern. Thus you first have to cast to the signed type of the same size as your unsigned type to force the signed reinterpretation.

(I usually face the opposite problem when trying to print chars as pairs of hex digits.)

answered Sep 19 '22 by Kerrek SB


uint8_t foo = 0xCE;          // 0b11001110
int8_t bar;
memcpy( &bar, &foo, 1 );

It even has the added bonus that 99% of compilers will completely optimise out the call to memcpy ...

answered Sep 22 '22 by Goz