I have a 64-bit long int with some bitfields packed into it. I need to take a 16-bit signed int stored in the second and third bytes and add it to a 32-bit value. I'm using something like this:
u32 Function(s32 value, u64 bitfield)
{
    return value + (s16)(bitfield >> 8);
}
Can I rely on the compiler to cast the bitfield to a 16-bit signed int before it expands it to a 32-bit signed int and performs the addition? If not, how else should I truncate the remaining bytes and perform the type conversion I require?
Yes, with the caveat that you're relying on implementation-defined behavior: in C, converting a value that doesn't fit into a signed integer type gives an implementation-defined result. Virtually every two's-complement compiler wraps the way you expect, but the standard doesn't guarantee it, and relying on this behavior will lead you into really difficult-to-diagnose "features" (bugs).
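To illustrate, here's a minimal sketch of what your cast typically does. The u64/s16 typedefs (not shown in your post) are assumed to map onto the stdint.h fixed-width types, and the bitfield value is just an example:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef int16_t  s16;

int main(void)
{
    u64 bitfield = 0x0000000000FFEE00ULL; /* bytes 1-2 hold 0xFFEE, i.e. -18 */
    /* On a two's-complement implementation this typically prints -18, but
       the out-of-range conversion to s16 is implementation-defined, so the
       standard does not guarantee that result. */
    printf("%d\n", (s16)(bitfield >> 8));
    return 0;
}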
You're probably better off spelling out exactly what you want and letting the optimizer eliminate the unnecessary operations:
u32 Function(s32 value, u64 bitfield)
{
    // Extract the 16-bit qty from bits 8..23 of the 64-bit field:
    // (x|x|x|x|x|1|2|x), preserving signedness. A plain u64 right
    // shift zero-fills, so sign-extend manually to stay portable.
    u32 raw = (u32)((bitfield >> 8) & 0xffffu);
    s32 field = (s32)raw - (s32)((raw & 0x8000u) << 1);
    return value + field;
}
(Yes, putting a comment there would also help future maintainers.)
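As a quick sanity check, here's a small test program, again assuming the stdint.h typedefs (the inputs are just example values):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;
typedef int32_t  s32;
typedef uint64_t u64;

u32 Function(s32 value, u64 bitfield) /* repeated from above */
{
    u32 raw = (u32)((bitfield >> 8) & 0xffffu);
    s32 field = (s32)raw - (s32)((raw & 0x8000u) << 1);
    return value + field;
}

int main(void)
{
    printf("%u\n", Function(100, 0x0000000000FFEE00ULL)); /* field = -18, prints 82  */
    printf("%u\n", Function(100, 0x0000000000012300ULL)); /* field = 291, prints 391 */
    return 0;
}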