Could you please help me figure out why the following expression is true:
x + y = x ^ y + (x & y) << 1
I am looking for some rules from bitwise logic that explain this mathematical equivalence.
It's like solving an ordinary base 10 addition problem, 955 + 445, by first adding all the columns individually and throwing away carried 1s:
  955
+ 445
-----
  390
Then finding all the columns where there should be a carried 1:
  955
+ 445
-----
  101
Shifting this and adding it to the original result:
   390
+ 1010
------
  1400
So basically you're doing addition but ignoring all the carried 1s, and then adding in the carried ones afterward, as a separate step.
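If it helps, here is the same base-10 idea as a small C sketch (the loop structure and the variable names are just for illustration, not part of the question):

    #include <stdio.h>

    int main(void) {
        int a = 955, b = 445;
        int sum_no_carry = 0, carries = 0, place = 1;

        /* Add each column on its own, recording where a carry would occur. */
        for (int x = a, y = b; x || y; x /= 10, y /= 10, place *= 10) {
            int column = x % 10 + y % 10;
            sum_no_carry += (column % 10) * place; /* column sum, carry thrown away */
            carries      += (column / 10) * place; /* 1 in every column that overflowed */
        }

        /* "Shifting" the carries in base 10 means multiplying by 10. */
        printf("%d\n", sum_no_carry + carries * 10); /* prints 1400 */
        return 0;
    }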
In base 2, XOR (^) correctly performs addition when either of the bits is a 0. When both bits are 1, it performs addition without carry, just like we did in the first step above.
x ^ y correctly adds all the bits where x and y are not both 1:
  1110111011
^ 0110111101
------------
  1000000110   (x ^ y)
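You can verify that step directly; here is a minimal C check, using the same two bit patterns written as decimal constants (955 and 445):

    #include <assert.h>

    int main(void) {
        unsigned x = 955;        /* 1110111011 */
        unsigned y = 445;        /* 0110111101 */
        assert((x ^ y) == 518);  /* 1000000110: the columns where the bits differ */
        return 0;
    }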
x & y gives us a 1 in all the columns where both bits are a 1. These are exactly the columns where we missed a carry:
  1110111011
& 0110111101
------------
  0110111001   (x & y)
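The same kind of check works for the AND step, with the same x and y as before:

    #include <assert.h>

    int main(void) {
        unsigned x = 955;        /* 1110111011 */
        unsigned y = 445;        /* 0110111101 */
        assert((x & y) == 441);  /* 0110111001: the columns where both bits are 1 */
        return 0;
    }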
Of course, when you carry a 1 when doing addition you shift it left one place, just like when you add in base 10:
   1000000110   (x ^ y)
+ 01101110010   ((x & y) << 1)
-------------
  10101111000
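Putting the pieces together in C with the same numbers (note the extra parentheses; they matter, as discussed below, and the variable names are just for illustration):

    #include <assert.h>

    int main(void) {
        unsigned x = 955, y = 445;
        unsigned no_carry = x ^ y;                   /*  1000000110 = 518 */
        unsigned carries  = x & y;                   /*  0110111001 = 441 */
        assert(no_carry + (carries << 1) == x + y);  /* both sides are 1400 */
        return 0;
    }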
x + y is not equivalent to x ^ y + (x & y) << 1 as written, because of operator precedence. However, your expression above will evaluate to true for most values, since = means assignment and non-zero values count as true; == is what tests for equality.
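For example, in C the unparenthesized version parses as x ^ ((y + (x & y)) << 1), because + and << bind more tightly than ^, so the two expressions generally give different results:

    #include <stdio.h>

    int main(void) {
        unsigned x = 955, y = 445;
        unsigned no_parens   = x ^ y + (x & y) << 1;      /* parsed as x ^ ((y + (x & y)) << 1) */
        unsigned with_parens = (x ^ y) + ((x & y) << 1);  /* the intended identity */
        printf("%u %u %u\n", x + y, no_parens, with_parens); /* prints 1400 1367 1400 */
        return 0;
    }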
EDIT: (x ^ y) + ((x & y) << 1) is correct once parentheses are added. The AND finds where a carry would happen and the shift carries it one place left. The XOR finds where an addition would happen with no carry. Adding the two together unifies the result.
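Repeating those same two steps until no carries remain is the classic way to add without using +; here is a minimal sketch, assuming unsigned operands (the function name add_no_plus is just for illustration):

    #include <stdio.h>

    unsigned add_no_plus(unsigned x, unsigned y) {
        while (y != 0) {
            unsigned carry = x & y;   /* columns that generate a carry       */
            x = x ^ y;                /* column-wise add, carries dropped    */
            y = carry << 1;           /* carries move one column to the left */
        }
        return x;
    }

    int main(void) {
        printf("%u\n", add_no_plus(955, 445)); /* prints 1400 */
        return 0;
    }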