I'm writing a datalog parser for a robot controller, and what's coming in from the data log is a number in the range 0–65535 (which is a 16-bit unsigned integer, if I'm not mistaken). I'm trying to convert that to a signed 16-bit integer to display to the user (since that was the actual datatype before the logger changed it).
Can someone give me a hand?
Example:
What the values should be (0, -1, -2, -3, -4)
What the values are (0, 65535, 65534, 65533, 65532)
To convert a signed integer to an unsigned integer, or an unsigned integer to a signed integer, you need only use a cast. For example:
int a = 6;
unsigned int b;
int c;
b = (unsigned int)a;
c = (int)b;
Actually, in many cases you can dispense with the cast.
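In the 16-bit types from the question (C#), the same cast-based round trip looks like this; a minimal sketch, with illustrative variable names:
Int16 a = -4;
UInt16 b = (UInt16)a; // b == 65532: the same 16 bits read as unsigned
Int16 c = (Int16)b;   // c == -4: the same bits read back as signed
(If overflow checking is enabled for the project, wrap the casts in unchecked(...) as shown further down.)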
A 16-bit integer can store 2^16 (or 65,536) distinct values. In an unsigned representation, these values are the integers between 0 and 65,535; using two's complement, possible values range from −32,768 to 32,767.
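Put another way, a raw log value v in 0–65535 decodes to v itself when v <= 32767 and to v - 65536 when v >= 32768, which is exactly the 65535 -> -1, 65534 -> -2 pattern in the question. A small C# sketch of that arithmetic (the helper name is made up for illustration):
// Maps a raw 0..65535 value to its two's-complement meaning.
static int ToSigned16(int raw)
{
    return raw >= 32768 ? raw - 65536 : raw;
}
// ToSigned16(0) == 0, ToSigned16(65535) == -1, ToSigned16(65532) == -4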
Signed Integer: A 16-bit signed integer ranging from -32,768 to +32,767.
Have you tried explicit casting?
UInt16 x = 65535;
var y = (Int16)x; // y = -1
Using unchecked here avoids a crash if the project's "Check for Arithmetic Overflow" option is turned on:
UInt16 x = 65535;
Int16 y = unchecked((Int16)x);
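Applied to the parser, the conversion can simply be done per sample as the values are read; a sketch with hypothetical array names:
UInt16[] rawSamples = { 0, 65535, 65534, 65533, 65532 }; // values as logged
Int16[] displayed = new Int16[rawSamples.Length];
for (int i = 0; i < rawSamples.Length; i++)
{
    displayed[i] = unchecked((Int16)rawSamples[i]); // 0, -1, -2, -3, -4
}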