So I have 3 numbers. One is a char, and the other two are int16_t (also known as shorts, though according to a table I found, shorts won't reliably be 16 bits).
I'd like to concatenate them together. So say their values were:
10010001
1111111111111101
1001011010110101
I'd like to end up with a long long containing:
1001000111111111111111011001011010110101000000000000000000000000
Using some solutions I've found online, I came up with this:
long long result;
result = num1;
result = (result << 8) | num2;
result = (result << 24) | num3;
But it doesn't work; it gives me very odd numbers when it's decoded.
In case there's a problem with my decoding code, here it is:
char num1 = num & 0xff;
int16_t num2 = num << 8 & 0xffff;
int16_t num3 = num << 24 & 0xffff;
What's going on here? I suspect it has to do with the size of a long long, but I can't quite wrap my head around it, and I want room for more numbers in it later.
To get the correct bit pattern you requested, you should use:
result = num1;
result = (result << 16) | num2;
result = (result << 16) | num3;
result <<= 24;
This will yield the exact bit pattern you requested, with the 24 bits at the LSB end left as 0:
1001000111111111111111011001011010110101000000000000000000000000
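For reference, here is a self-contained sketch of that approach using the values from the question. Note that I've used unsigned types (unsigned char, uint16_t, unsigned long long), which is my own assumption to sidestep sign extension: a negative int16_t such as 1111111111111101 would otherwise smear 1-bits into the upper part of result when OR'd in. The decode at the end reverses the packing with right shifts.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The three values from the question, as raw bit patterns. */
    unsigned char num1 = 0x91;   /* 10010001         */
    uint16_t      num2 = 0xFFFD; /* 1111111111111101 */
    uint16_t      num3 = 0x96B5; /* 1001011010110101 */

    /* Pack: 8 bits of num1, then 16 of num2, then 16 of num3,
       then push the whole 40-bit value to the top of the 64 bits. */
    unsigned long long result = num1;
    result = (result << 16) | num2;
    result = (result << 16) | num3;
    result <<= 24;

    printf("packed:   %016llX\n", result); /* 91FFFD96B5000000 */

    /* Decode: right-shift each field back down and mask it off. */
    unsigned char out1 = (result >> 56) & 0xFF;
    uint16_t      out2 = (result >> 40) & 0xFFFF;
    uint16_t      out3 = (result >> 24) & 0xFFFF;

    printf("unpacked: %02X %04X %04X\n", out1, out2, out3);
    return 0;
}

Note that the decode uses right shifts; the left shifts in the decoding code from the question move the fields further away from the low bits instead of bringing them back down.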
For that last shift, you should only be shifting by 16, not by 24. 24 is the current length of your binary string, after the combination of num1 and num2. You need to make room for num3, which is 16 bits, so shift left by 16.
Edit:
Just realized the first shift is wrong too. It should also be 16, since num2 is 16 bits wide and shifting by only 8 doesn't make enough room for it.
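Applied to the code from the question, that fix might look like this; the & 0xFF / & 0xFFFF masks are my own addition (not part of this answer) to keep a negative char or int16_t from sign-extending into the high bits of result:

long long result;
result = num1 & 0xFF;                      /* 8 bits of num1                */
result = (result << 16) | (num2 & 0xFFFF); /* make 16 bits of room for num2 */
result = (result << 16) | (num3 & 0xFFFF); /* make 16 bits of room for num3 */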