I have a high byte and a low byte that I would like to combine into a short.
I have implemented this, which seems to work, but I am a bit confused about why. Both high_byte and low_byte are of type byte.
short word = (short)(high_byte << 8 | low_byte);
In this code, shouldn't high_byte << 8 be zero? Then I tried this:
(byte)1 << 8
which equals 256, although I thought it should be 0. I am clearly missing something.
Could someone please explain?
From the C# language specification, section 4.1.5:
The integral-type unary and binary operators always operate with signed 32-bit precision, unsigned 32-bit precision, signed 64-bit precision, or unsigned 64-bit precision:
...
For the binary << and >> operators, the left operand is converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all possible values of the operand. The operation is then performed using the precision of type T, and the type of the result is T.
That is, whenever you apply an operator to integral types in C#, the result is always a minimum of 32 bits. There are other rules (given in the ...) for other operators, which define exactly how the final types are determined.
(As an aside, I'd have thought this was important enough to mention in the C# Reference, but I'm blowed if I can find it in there anywhere.)
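Here's a minimal sketch of the promotion in action, using the variable names from the question (the values 0x12 and 0x34 are just illustrative):

// Illustrative values; any byte values behave the same way.
byte high_byte = 0x12;
byte low_byte = 0x34;

// Both operands are bytes, but the shift is performed in 32-bit int precision,
// so nothing falls off the top of the byte.
int shifted = high_byte << 8;        // int, value 0x1200 (4608), not 0
int combined = shifted | low_byte;   // still int, value 0x1234 (4660)

// The explicit cast is what narrows the 32-bit result back down to 16 bits.
short word = (short)combined;

Console.WriteLine($"{shifted} {combined} {word}");   // prints: 4608 4660 4660

This is also why the cast to short is required in the original one-liner: the expression high_byte << 8 | low_byte has type int, and C# won't implicitly narrow it.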