I have to do a sign extension for a 16-bit integer and for some reason, it seems not to be working properly. Could anyone please tell me where the bug is in the code? I've been working on it for hours.
int signExtension(int instr) {
    int value = (0x0000FFFF & instr);   /* keep only the low 16 bits */
    int mask = 0x00008000;              /* bit 15 is the sign bit of the field */
    int sign = (mask & instr) >> 15;
    if (sign == 1)
        value += 0xFFFF0000;            /* set the upper 16 bits for a negative value */
    return value;
}
The instruction (instr) is 32 bits, and inside it I have a 16-bit number.
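For reference, a minimal harness along these lines should print -890 if the sign extension is correct; the instruction word 0x1234FC86 is just an example value whose low half, 0xFC86, is -890 as a signed 16-bit number (compile it in the same file as the function above).
#include <stdio.h>

int signExtension(int instr);   /* defined above */

int main(void) {
    int instr = 0x1234FC86;     /* arbitrary upper half, negative 16-bit field below */
    printf("%d\n", signExtension(instr));   /* expected output: -890 */
    return 0;
}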
What is wrong with:
int16_t s = -890;
int32_t i = s; //this does the job, doesn't it?
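Wrapped in a complete program (the PRId32 macro from <inttypes.h> is just one way to print an int32_t), that looks like:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int16_t s = -890;
    int32_t i = s;                   /* the implicit conversion sign-extends */
    printf("%" PRId32 "\n", i);      /* prints -890 */
    return 0;
}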
What's wrong with using the built-in types?
int32_t signExtension(int32_t instr) {
    int16_t value = (int16_t)instr;     /* truncate to the low 16 bits */
    return (int32_t)value;              /* widen back, sign-extending */
}
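As a quick standalone check (the instruction words below are again made-up example values), this version handles both negative and positive 16-bit fields:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int32_t signExtension(int32_t instr) {
    int16_t value = (int16_t)instr;
    return (int32_t)value;
}

int main(void) {
    printf("%" PRId32 "\n", signExtension(0x1234FC86));   /* low half 0xFC86 -> -890 */
    printf("%" PRId32 "\n", signExtension(0x12340379));   /* low half 0x0379 ->  889 */
    return 0;
}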
or better yet (this might generate a warning if passed an int32_t):
int32_t signExtension(int16_t instr) {
    return (int32_t)instr;      /* the conversion to int32_t sign-extends */
}
or, for that matter, replace signExtension(value) with ((int32_t)(int16_t)value).
You obviously need to include <stdint.h> for the int16_t and int32_t data types.
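Putting it together, a complete example that uses the cast expression directly (again with a made-up instruction value) would look something like this:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int32_t instr = 0x1234FC86;                  /* low 16 bits hold -890 */
    int32_t extended = (int32_t)(int16_t)instr;  /* keep the low half, sign-extend */
    printf("%" PRId32 "\n", extended);           /* prints -890 */
    return 0;
}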