I have a mysterious error in an algorithm that subtracts unsigned integers of arbitrary length. It works for almost every pair of numbers, but if n is not smaller than the number of bits in a cell, then (2^n + 1) - (2^n - 1) <> 2. I can't understand why the algorithm doesn't work.
The numbers are stored in arrays in the "cellimal" system (base = 2^bits-per-cell), with the least significant cell at the lowest address. The array at ad1 is to be subtracted from the array at ad2; both have the same length len, and the result is stored back at ad2:
   false borrow !
   len 0 do
      i ad2 + @ borrow @ +            \ fetch minuend cell, add incoming borrow
      i ad1 + @ 2dup u< dup borrow !  \ need a borrow from the next cell?
      if   1 swap 0 d- drop           \ subtraction with borrow
      else -                          \ subtraction without borrow
      then i ad2 + !
   cell +loop
Note: I think the error occurs when borrowing from a cell that contains zero.
Perhaps someone can correct the algorithm?
Yes, we should also keep the sign of the carry when borrowing: the borrow generated by adding the incoming borrow to a cell has to be propagated too.
The straightforward solution is just to use D- everywhere:
   0 borrow !
   len 0 DO
      ad2 I + @ 0     \ minuend cell as an unsigned double
      borrow @ 0 D-   \ subtract the incoming borrow
      ad1 I + @ 0 D-  \ subtract the subtrahend cell
      ABS borrow !    \ |high cell| (0 or 1) is the outgoing borrow
      ad2 I + !       \ low cell is the result
   cell +LOOP
Or some variation (this is the body of the loop):
   borrow @ S>D      \ sign-extend the borrow (0 or -1) to a double
   ad2 I + @ 0 D+    \ add the minuend cell
   ad1 I + @ 0 D-    \ subtract the subtrahend cell
   borrow !          \ high cell (0 or -1) is the new borrow
   ad2 I + !         \ low cell is the result
Perhaps the right way is to use the idea of the M+ operation instead.