I'm working on this problem (finding the next smaller number with the same number of set bits) and have come up with a solution, though maybe a condition or two still needs to be added. I'm not sure it's correct, and using two loops feels cumbersome; I'm also not sure it's efficient. Any nice trick or better approach would be welcome :). (Language is not a barrier.)
My Algorithm:
void nextSmaller(int number) {
    int firstZeroBitHelper = 1, nextOneBitHelper;
    // Scan from the LSB upwards and stop at the first zero bit.
    while (firstZeroBitHelper < number) {
        bool bit = number & firstZeroBitHelper;
        if (!bit)
            break;
        firstZeroBitHelper = firstZeroBitHelper << 1;
    }
    if (firstZeroBitHelper >= number) {
        cout << "No minimum number exists" << endl;
        return;
    }
    // Continue scanning upwards and stop at the first set bit after that zero.
    nextOneBitHelper = firstZeroBitHelper << 1;
    while (nextOneBitHelper < number) {
        bool bit = number & nextOneBitHelper;
        if (bit)
            break;
        nextOneBitHelper = nextOneBitHelper << 1;
    }
    // Change that first zero to 1.
    number = number | firstZeroBitHelper;
    // Change the next set bit to zero.
    number = number & ~nextOneBitHelper;
    cout << number << endl;
}
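For reference, here is a small driver showing how I call it and the output I get (it assumes the nextSmaller function above is in scope):

#include <iostream>
using namespace std;

// nextSmaller(int) defined as above

int main() {
    nextSmaller(10); // prints 9: 0b1010 -> 0b1001
    nextSmaller(11); // prints 7: 0b1011 -> 0b0111
    nextSmaller(7);  // prints "No minimum number exists": 0b111 is the smallest value with three set bits
    return 0;
}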
Continuing from my comment...
Well, I found it, and pretty quickly too. See The Art of Computer Programming, section 7.1.3 (in Volume 4A), answer to exercise 21: "the reverse of Gosper's hack".
It looks like this:
t = y + 1;
u = t ^ y;
v = t & y;
x = v - (v & -v) / (u + 1);
Here y is the input and x is the result. The same optimizations as in Gosper's hack apply to that division.
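Wrapped up as a self-contained C++ function (the wrapper, the function name, and the test values are my own; only the four lines above are from Knuth), it could look like this:

#include <cstdint>
#include <iostream>

// Reverse of Gosper's hack: returns the next smaller number with the
// same number of set bits as y. Assumes such a number exists, i.e. y is
// not zero and not a single block of ones starting at bit 0.
uint32_t prevSamePopcount(uint32_t y) {
    uint32_t t = y + 1;
    uint32_t u = t ^ y;              // the trailing 1-bits of y plus the 0 just above them
    uint32_t v = t & y;              // y with its trailing 1-bits cleared
    return v - (v & -v) / (u + 1);   // both operands of '/' are powers of two, so it can be a shift
}

int main() {
    std::cout << prevSamePopcount(12) << '\n'; // 0b1100   -> 0b1010,   prints 10
    std::cout << prevSamePopcount(35) << '\n'; // 0b100011 -> 0b011100, prints 28
}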
Going upwards: find the rightmost 0-bit that has a 1 somewhere to its right, set it, clear one 1-bit below it, and push the remaining less-significant 1-bits as far to the right as possible.
Going downwards: find the rightmost 1-bit that has a 0 somewhere to its right, move it one position to the right, and push the remaining less-significant 1-bits as far to the left as possible, directly below it.
An example to make the downwards case clear: for 100011 (35), the rightmost 1 that has a 0 to its right is the top bit; moving it one place right gives 010011, and packing the two low 1-bits up against it gives 011100 (28), the next smaller number with three set bits.
I'll explain the case of going upwards first, because it feels less tricky to me. We want to find the least-significant position where we can move a 1-bit one position left (in other words, the rightmost 0 that has a 1 to its right). It should be clear that this is the rightmost bit that we can possibly set, since we need to clear a bit somewhere else for every bit we set, and we need to clear a bit somewhere to the right of the bit we set, or else the number will get smaller instead of larger.
Now that we've set this bit, we want to clear one bit (to restore the total number of set bits), and reshuffle the remaining bits so that the number is as small as possible (this makes it the next greater number with the same number of set bits). We can clear the bit to the right of the one we just set, and we can push any remaining 1-bits as far right as possible without fear of going below our original number, since all the less-significant bits together still add up to less than the single bit we just set.
Finding the next lower number instead of the next higher is basically the same, except that we're looking for the rightmost position where we can move a set bit one position right, and after doing that we want to move all less-significant bits as far left as possible.
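To make the downwards case concrete, here is a rough loop-based sketch of that procedure in C++ (the function name and structure are my own, and it deliberately avoids any bit-twiddling shortcuts):

#include <cstdint>
#include <iostream>

// Next smaller number with the same number of set bits, following the
// description above: move the rightmost 1 that has a 0 to its right one
// position right, then pack the lower 1-bits directly below it.
// Returns 0 when no smaller number exists (n is zero or a block of ones).
uint32_t nextSmallerSamePopcount(uint32_t n) {
    uint32_t temp = n;
    int trailingOnes = 0, zerosAbove = 0;

    while (temp & 1) { ++trailingOnes; temp >>= 1; }       // strip the trailing 1s
    if (temp == 0) return 0;                               // nothing left to move right

    while ((temp & 1) == 0) { ++zerosAbove; temp >>= 1; }  // count the 0s above them

    int p = trailingOnes + zerosAbove;                     // position of the 1-bit we move

    uint32_t cleared = (n >> p >> 1) << p << 1;            // clear bit p and everything below
                                                           // (two shifts avoid a shift by 32 when p == 31)
    uint32_t block = (1u << (trailingOnes + 1)) - 1;       // the moved bit plus the old trailing 1s
    return cleared | (block << (zerosAbove - 1));          // pack them just below position p
}

int main() {
    std::cout << nextSmallerSamePopcount(12) << '\n'; // 0b1100   -> 0b1010,   prints 10
    std::cout << nextSmallerSamePopcount(35) << '\n'; // 0b100011 -> 0b011100, prints 28
    std::cout << nextSmallerSamePopcount(7)  << '\n'; // 0b111 has no smaller counterpart, prints 0
}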
It looks like others have got the bit-twiddling versions of this well in hand, but I wanted to see if I could give a good explanation of the logical/mathematical side of the algorithm.