I have an integer u=101057541.
In binary, this is equal to: 00000110 00000110 00000100 00000101
Now, I regard each byte as a separate decimal value (so 6, 6, 4, 5 in this case). I want to subtract 1 from the first byte, resulting in 6-1=5. I try to do this as follows:
int West = u | (((u>>24) - 1) << 24);
However, the result is the same as when I ADD 1 to this byte. Can someone explain why, and tell me how to subtract 1 from this byte?
UPDATE: Thus, the result I want is the following binary number:
00000101 00000110 00000100 00000101
Because you're "or"-ing that byte back in:
u | (((u>>24) - 1) << 24);
should be
(u & mask) | (((u>>24) - 1) << 24);
where mask is everything except the byte you're playing with.
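As a minimal sketch (using the u and West names from the question, and the mask this answer describes):
int u = 101057541;                 // 0x06060405
int mask = ~(0xFF << 24);          // 0x00FFFFFF: everything except the top byte
int West = (u & mask) | (((u >> 24) - 1) << 24); // note: u >> 24 sign-extends for negative u; add & 0xFF if that matters
Console.WriteLine(West.ToString("X8")); // 05060405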
You might find unsafe code easier:
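// note: pointer code requires an unsafe context (compile with /unsafe or AllowUnsafeBlocks)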
int i = 101057541;
byte* b = (byte*)&i;
b[3]--; // note CPU endianness is important here
Console.WriteLine(i);
You can do the same thing without unsafe using "spans", if you're using all the latest bits:
int i = 101057541;
var bytes = MemoryMarshal.Cast<int, byte>(MemoryMarshal.CreateSpan(ref i, 1));
bytes[3]--; // note CPU endianness is important here
Console.WriteLine(i);
or you could use a "union" via a struct
with explicit layout - so 4 bytes overlapping 1 int:
var x = new Int32Bytes();
x.Value = 101057541;
x.Byte3--; // note CPU endianness is important here
Console.WriteLine(x.Value);
with:
[StructLayout(LayoutKind.Explicit)]
struct Int32Bytes
{
    [FieldOffset(0)]
    public int Value;
    [FieldOffset(0)]
    public byte Byte0;
    [FieldOffset(1)]
    public byte Byte1;
    [FieldOffset(2)]
    public byte Byte2;
    [FieldOffset(3)]
    public byte Byte3;
}
When you subtract 1 from 00000110 the result is 00000101. You OR this with the original value and you get 00000111, which is the same as if you had added 1.
As a one-liner for your problem, you should mask out the bits you are manipulating:
int West = (u & 0x00FFFFFF) | ((((u >> 24) & 0xFF) - 1) << 24); // 0x05060405 for the example value