The ISO C standard allows three encoding methods for signed integers: two's complement, one's complement and sign/magnitude.
What's an efficient or good way to detect the encoding at runtime (or some other time if there's a better solution)? I want to know this so I can optimise a bignum library for the different possibilities.
I plan on calculating this and storing it in a variable each time the program runs so it doesn't have to be blindingly fast - I'm assuming the encoding won't change during the program run :-)
You just have to check the low-order bits of the constant -1, with something like `-1 & 3`. This evaluates to 3 for two's complement, 2 for one's complement, and 1 for sign/magnitude.
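Here is a minimal sketch of doing that check at program startup, as the question suggests. The enum, the function name `detect_encoding`, and the printed labels are illustrative names of my own, not anything from the original answer.

```c
#include <stdio.h>

enum encoding { TWOS_COMPLEMENT, ONES_COMPLEMENT, SIGN_MAGNITUDE, UNKNOWN };

/* Classify the signed-integer representation by looking at the two
   low-order bits of -1. */
static enum encoding detect_encoding(void)
{
    switch (-1 & 3) {
    case 3:  return TWOS_COMPLEMENT;  /* -1 is all ones                  */
    case 2:  return ONES_COMPLEMENT;  /* -1 is all ones except bit 0     */
    case 1:  return SIGN_MAGNITUDE;   /* -1 is the sign bit plus bit 0   */
    default: return UNKNOWN;
    }
}

int main(void)
{
    static const char *names[] = {
        "two's complement", "one's complement", "sign/magnitude", "unknown"
    };
    printf("signed encoding: %s\n", names[detect_encoding()]);
    return 0;
}
```

You could call something like `detect_encoding()` once at startup and store the result in a variable, exactly as the question proposes.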
This should even be possible to do in a preprocessor expression inside `#if`/`#else` constructs.
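A sketch of the same test in the preprocessor, assuming the compiler's `#if` arithmetic uses the same signed representation as the target (which it does in practice, though it is worth verifying for an exotic toolchain). The macro names are hypothetical.

```c
#if   (-1 & 3) == 3
#  define BIGNUM_TWOS_COMPLEMENT 1
#elif (-1 & 3) == 2
#  define BIGNUM_ONES_COMPLEMENT 1
#elif (-1 & 3) == 1
#  define BIGNUM_SIGN_MAGNITUDE  1
#else
#  error "Unrecognised signed integer encoding"
#endif
```

Detecting it at compile time this way lets you select the optimised code paths with `#if` rather than branching at runtime.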