I've recently read an interesting thread on the D newsgroup, which basically asks,
Given two (signed) integers a ∈ [a_min, a_max] and b ∈ [b_min, b_max], what is the tightest interval of a | b?
I'm wondering whether interval arithmetic can be applied to general bitwise operators (assuming infinite bits). Bitwise-NOT and shifts are trivial, since they just correspond to -1 − x and 2^n x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties.
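For the two easy cases, here is a minimal sketch (my own, not from the thread; using int64_t as a stand-in for the question's unbounded integers, and assuming no overflow) of how NOT and left shift act on an interval:

```c
#include <stdint.h>

typedef struct { int64_t lo, hi; } Interval;

/* ~x == -1 - x is a decreasing map, so the endpoints swap. */
Interval interval_not(Interval x) {
    Interval r = { -1 - x.hi, -1 - x.lo };
    return r;
}

/* x << n == x * 2^n is increasing (assuming no overflow),
   so both endpoints simply scale by 2^n. */
Interval interval_shl(Interval x, unsigned n) {
    Interval r = { x.lo * ((int64_t)1 << n), x.hi * ((int64_t)1 << n) };
    return r;
}
```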
Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR?
(Note: since ~(a | b) = ~a & ~b, solving the bitwise-AND and bitwise-NOT problems implies bitwise-OR is done as well. For example, [2, 3] | [8, 9] = [10, 11].)
Argh. Are you sure these have to be signed integers? That just brings up a pile of annoying cases where you have to flip things.
If we limit ourselves to unsigned integers, we can just walk down the bits to find the maximum. Any bit above the highest bit set in max(a_max, b_max) obviously can't be on. Assume without loss of generality that a_max > b_max. Keep all the bits in a_max until we hit the highest bit in b_max. Then keep all the bits of both until we have flexibility on at least one side (i.e. we can choose a number in the allowed range that will flip that bit). If the other side cannot set that bit to 1, set it to 1 and keep going (setting one higher bit always beats setting all lower bits). Otherwise, set your answer to (that bit - 1), which places 1s in all the rest of the bits, and you're done.
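The same walk can be written as a loop over a descending bit mask. This is not the answerer's own code; it is essentially the classic maxOR routine from Hacker's Delight (section 4-3), sketched here for unsigned 32-bit operands:

```c
/* Largest x | y over x in [a, b] and y in [c, d], all unsigned. */
unsigned maxOR(unsigned a, unsigned b, unsigned c, unsigned d) {
    unsigned m = 0x80000000u, temp;
    while (m != 0) {
        if (b & d & m) {                  /* both upper bounds have this bit set    */
            temp = (b - m) | (m - 1);     /* drop it from b and set all lower bits, */
            if (temp >= a) { b = temp; break; }
            temp = (d - m) | (m - 1);     /* or do the same on d's side instead     */
            if (temp >= c) { d = temp; break; }
        }
        m >>= 1;
    }
    return b | d;
}
```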
Now we do the same sort of thing for the minimum, except we avoid setting bits at every opportunity, and take every opportunity to pair bits if one side must set one (if one side is forced to set a bit, the other side can match it and clear everything below that bit for free).
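The minimum pass is symmetric; again a sketch, essentially Hacker's Delight's minOR rather than a literal transcription of the description above:

```c
/* Smallest x | y over x in [a, b] and y in [c, d], all unsigned. */
unsigned minOR(unsigned a, unsigned b, unsigned c, unsigned d) {
    unsigned m = 0x80000000u, temp;
    while (m != 0) {
        if (~a & c & m) {                 /* c already supplies this bit and a lacks it:   */
            temp = (a | m) & -m;          /* round a up to that bit, clearing a's lower bits */
            if (temp <= b) { a = temp; break; }
        } else if (a & ~c & m) {          /* symmetric case: round c up instead */
            temp = (c | m) & -m;
            if (temp <= d) { c = temp; break; }
        }
        m >>= 1;
    }
    return a | c;
}
```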
This is O(n) in the number of bits if we can do math on the integers in O(1) time. Otherwise, it's O(n^2).
Here's how it works on your example of [2,3] | [8,9] (in binary, a ∈ [10, 11] and b ∈ [1000, 1001]):

1001 -> 1xxx works
10 to 11 -> xx1x always set ; 11x doesn't fit in a so we're not done
11 can set the last bit -> 1011 is max

1000 -> 1xxx must be set
10 to 11 -> xx1x must be set ; 11x doesn't fit so we're not done
10 has xxx0, as does 1000 -> xxx0 works -> 1010 is min
Edit: adding sign bits doesn't change the order of the algorithm, but it does require more annoying bookkeeping. If you cannot get rid of a sign bit, then you flip min and max strategies (i.e. set vs. don't set bits). If it's optional, the min is when you set it and then try to keep everything else unset; the max is when you unset it and then try to keep everything else set.
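One way to organize that bookkeeping (my own sketch, not the answerer's code; it assumes the unsigned minOR/maxOR sketches above are in scope and the usual two's-complement conversion between int and unsigned): split each signed range at zero into pieces with a constant sign bit, run the unsigned routines on the raw bit patterns of each piece, and take the extremes over the at-most-four combinations. Within one piece the pattern order matches the signed order, and the result's sign bit is fixed per combination, so the unsigned extremes convert straight back.

```c
#include <limits.h>

typedef struct { unsigned lo, hi; } URange;

/* Split a signed range into at most two pieces that each have a
   constant sign bit; returns the number of pieces produced. */
static int splitAtSign(int lo, int hi, URange out[2]) {
    if (lo < 0 && hi >= 0) {
        out[0] = (URange){ (unsigned)lo, (unsigned)-1 };  /* [lo, -1] */
        out[1] = (URange){ 0u, (unsigned)hi };            /* [0, hi]  */
        return 2;
    }
    out[0] = (URange){ (unsigned)lo, (unsigned)hi };
    return 1;
}

/* Tightest [*lo, *hi] containing x | y for signed x in [alo, ahi]
   and y in [blo, bhi]; relies on minOR/maxOR sketched earlier. */
void signedOrRange(int alo, int ahi, int blo, int bhi, int *lo, int *hi) {
    URange xs[2], ys[2];
    int nx = splitAtSign(alo, ahi, xs);
    int ny = splitAtSign(blo, bhi, ys);
    int best_lo = INT_MAX, best_hi = INT_MIN;
    for (int i = 0; i < nx; i++) {
        for (int j = 0; j < ny; j++) {
            /* The sign bit of x | y is constant within this pair of pieces,
               so the unsigned extremes reinterpret directly as signed ones. */
            int mn = (int)minOR(xs[i].lo, xs[i].hi, ys[j].lo, ys[j].hi);
            int mx = (int)maxOR(xs[i].lo, xs[i].hi, ys[j].lo, ys[j].hi);
            if (mn < best_lo) best_lo = mn;
            if (mx > best_hi) best_hi = mx;
        }
    }
    *lo = best_lo;
    *hi = best_hi;
}
```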
Second edit: here's another example; both ranges are [1001, 1100] (binary, i.e. [9, 12]):
Must keep first bit -> 1xxx
Can set 2nd bit -> x1xx
Other _could have but did not need to_ set 2nd bit -> set 2nd bit -1 -> xx11
-> 1111 is max
Must keep first bit -> 1xxx
Neither needs 2nd bit -> x0xx
Neither needs 3rd bit -> xx0x
Both need 4th bit -> xxx1
-> 1001 is min
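To tie this back to the question's De Morgan note: with the unsigned sketches above in the same file, AND bounds fall out for free, and both worked examples can be checked (a quick assumed test harness, not part of the original answer):

```c
#include <stdio.h>

/* a & b == ~(~a | ~b), and ~ maps [lo, hi] to [~hi, ~lo]. */
unsigned minAND(unsigned a, unsigned b, unsigned c, unsigned d) {
    return ~maxOR(~b, ~a, ~d, ~c);
}
unsigned maxAND(unsigned a, unsigned b, unsigned c, unsigned d) {
    return ~minOR(~b, ~a, ~d, ~c);
}

int main(void) {
    /* first example:  [2,3]  | [8,9]  -> [10, 11] */
    printf("[%u, %u]\n", minOR(2, 3, 8, 9), maxOR(2, 3, 8, 9));
    /* second example: [9,12] | [9,12] -> [9, 15]  (1001..1100 in binary) */
    printf("[%u, %u]\n", minOR(9, 12, 9, 12), maxOR(9, 12, 9, 12));
    /* AND via De Morgan: [2,3] & [8,9] -> [0, 1] */
    printf("[%u, %u]\n", minAND(2, 3, 8, 9), maxAND(2, 3, 8, 9));
    return 0;
}
```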