I have a question about Euclid's Algorithm for finding greatest common divisors: gcd(p, q), where p > q and q is an n-bit integer. I'm trying to follow a time complexity analysis of the algorithm (the input is n bits, as above):
gcd(p, q)
    if (p == q)
        return q
    if (p < q)
        return gcd(q, p)    // swap arguments so that p >= q (the return was missing)
    while (q != 0)
        temp = p % q        // one Euclidean step: (p, q) -> (q, p mod q)
        p = q
        q = temp
    return p
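For anyone who wants to run it, here is an equivalent Python version; the steps counter is my own addition, purely to count loop iterations:

import random

def gcd(p, q):
    """Euclid's algorithm; returns (gcd, number of loop iterations)."""
    if p < q:
        p, q = q, p                # ensure p >= q
    steps = 0
    while q != 0:
        p, q = q, p % q            # one Euclidean step: (p, q) -> (q, p mod q)
        steps += 1
    return p, steps

# Example with a 31-bit q; the iteration count stays on the order of n.
print(gcd(2**35 + 12345, 2**31 - 1))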
I already understand that the sum of the two numbers, u + v, where u and v stand for the initial values of p and q, reduces by a factor of at least 1/2 at each iteration.
Now let m be the number of iterations of this algorithm. We want to find the smallest integer m such that (1/2)^m (u + v) <= 1.
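In other words (assuming, as the setup suggests, that p also fits in roughly n bits):

    (1/2)^m (u + v) <= 1   <=>   2^m >= u + v   <=>   m >= log2(u + v)

and since u + v < 2^{n+1} when both numbers fit in n bits, the smallest such m is at most n + 1 = O(n).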
Here is my question.
I get that the sum of the two numbers after m iterations is upper-bounded by (1/2)^m (u + v). But I don't really see why the maximum m is reached when this quantity is <= 1.
The answer is O(n) for an n-bit q, but this is where I'm getting stuck. Please help!
Imagine that we have p and q where p > q. Now, there are two cases:
1) p >= 2*q: in this case, p will be reduced to something less than q after the mod, so the new sum q + (p mod q) is less than 2q, while the old sum p + q was at least 3q; the sum is therefore at most 2/3 of what it was before.
2) q < p < 2*q: in this case, the mod operation just subtracts q from p, so the new sum is q + (p - q) = p, while the old sum p + q is more than (3/2)p; again the sum is at most 2/3 of what it was before.
Therefore, at each step the sum shrinks to at most 2/3 of the previous sum. Since your numbers are n bits, the initial sum is at most 2^{n+1}, so after log_{3/2} 2^{n+1} steps, which is O(n), the upper bound (2/3)^m * 2^{n+1} drops below 1. But as long as the loop is still running, p and q are both positive integers, so p + q >= 2; once the upper bound on the sum falls to 1 or below, the loop cannot still be running. That is why the maximum m is the one for which the bound is <= 1, and why the total number of iterations is O(n).
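As a quick sanity check of that 2/3 factor, here is a short script (mine, not part of the analysis above) that traces p + q across iterations and verifies both the contraction and the roughly linear growth of the iteration count with bit length:

import random

def euclid_trace(p, q):
    """Run Euclid's algorithm, recording p + q after every iteration."""
    if p < q:
        p, q = q, p
    sums = [p + q]
    while q != 0:
        p, q = q, p % q
        sums.append(p + q)
    return sums

for n in (8, 16, 32, 64):
    q = random.getrandbits(n) | (1 << (n - 1))  # force q to be exactly n bits
    p = q + random.getrandbits(n)               # some p >= q of similar size
    sums = euclid_trace(p, q)
    # Each step should shrink the sum to at most 2/3 of its previous value
    # (compared in integer arithmetic to avoid float rounding).
    assert all(3 * new <= 2 * old for old, new in zip(sums, sums[1:]))
    print(f"n = {n:2d} bits -> {len(sums) - 1} iterations")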