I have two numbers, x1 and x2. For a number y, I want to calculate the common divisor of x1 and x2 that is as close as possible to y.
Is there an efficient algorithm for this?
I believe it's time to rephrase my problem and be clearer. This is not about integers...
So, say we have two numbers x1 and x2, and the user inputs a number y. What I want to find is a number y' close to y so that x1 % y' and x2 % y' are very small (smaller than 0.02, for example, but let's call this number LIMIT). In other words, I don't need an optimal algorithm, but a good approximation.
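To make the acceptance criterion concrete, here is a tiny Python check (the names are mine and purely illustrative; it only tests a candidate y', it does not find one):

import math

LIMIT = 0.02  # the tolerance from the problem statement

def is_acceptable(x1, x2, y_prime, limit=LIMIT):
    # y' is acceptable when both remainders are below the limit.
    return math.fmod(x1, y_prime) < limit and math.fmod(x2, y_prime) < limit

print(is_acceptable(6.0, 4.0, 2.0))  # True: both remainders are 0.0
print(is_acceptable(6.0, 4.0, 2.5))  # False: 6.0 % 2.5 == 1.0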
I thank you all for your time and effort, that's really kind!
gcd(a, 0) = a and gcd(0, b) = b, because everything divides 0. If a and b are both even, gcd(a, b) = 2*gcd(a/2, b/2), because 2 is a common divisor. Halving and doubling can be done with the bitwise shift operators. If a is odd and b is even, gcd(a, b) = gcd(a, b/2), because 2 is not a common divisor in that case.
So, Euclid's method for computing the greatest common divisor of two positive integers consists of replacing the larger number by the difference of the two numbers, and repeating this until the two numbers are equal: that value is their greatest common divisor. For example, (48, 18) → (30, 18) → (12, 18) → (12, 6) → (6, 6), so gcd(48, 18) = 6.
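A minimal Python sketch of the two ideas above (the function name is mine; this mirrors the rules described, not any particular library routine):

def binary_gcd(a, b):
    # gcd(a, 0) = a and gcd(0, b) = b: everything divides 0.
    if a == 0:
        return b
    if b == 0:
        return a
    # Both even: 2 is a common divisor; halve with >> and double with <<.
    if a % 2 == 0 and b % 2 == 0:
        return binary_gcd(a >> 1, b >> 1) << 1
    # Exactly one even: 2 is not a common divisor, so halve only that one.
    if a % 2 == 0:
        return binary_gcd(a >> 1, b)
    if b % 2 == 0:
        return binary_gcd(a, b >> 1)
    # Both odd: one subtraction step of Euclid's method.
    if a > b:
        return binary_gcd(a - b, b)
    return binary_gcd(a, b - a)

print(binary_gcd(48, 18))  # 6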
I believe that there is no known efficient (polynomial-time) algorithm for this problem, because there is a polynomial-time reduction from integer factorization to this problem. Since there is no known polynomial-time algorithm for integer factorization, there can be no known polynomial-time algorithm for your problem either, since otherwise we would indeed have a polynomial-time algorithm for integer factorization.
To see how this works, suppose you have a number n that you'd like to factor. Now, using whatever algorithm you'd like, find the common divisor of n and n (that is, the divisor of n) closest to √n. Since every composite n has a nontrivial divisor between 2 and √n, which is closer to √n than either 1 or n is, this finds either (1) a nontrivial divisor of n if n is composite, or (2) the number 1 if n is prime. You can then split n by this divisor and recurse on the two parts to produce all the factors of n. Since n has at most O(log n) prime factors, this requires at most polynomially many calls to the solver for your problem, so we have a polynomial-time reduction from integer factorization to this problem. As mentioned above, this means that, at least in the open literature, there is no known efficient classical algorithm for solving this problem. One might exist, but it would be a really hugely important result.
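Here is a rough Python sketch of the reduction, assuming a hypothetical solver closest_divisor(x1, x2, y) for your problem (the brute-force placeholder below exists only so the sketch runs end to end; it is of course not efficient):

from math import gcd, isqrt

def closest_divisor(x1, x2, y):
    # Placeholder oracle: the common divisor of x1 and x2 closest to y,
    # found by brute force over the divisors of gcd(x1, x2).
    g = gcd(x1, x2)
    return min((d for d in range(1, g + 1) if g % d == 0),
               key=lambda d: abs(d - y))

def factorize(n):
    # Factor n by repeatedly asking for the divisor of n closest to sqrt(n).
    if n == 1:
        return []
    d = closest_divisor(n, n, isqrt(n))
    if d == 1:
        return [n]  # n is prime
    return factorize(d) + factorize(n // d)

print(sorted(factorize(360)))  # [2, 2, 2, 3, 3, 5]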
Sorry for the negative answer, and hope this helps!
This is as efficient as I can get it:
from math import gcd  # Python 3.5+; on older versions use fractions.gcd

def f(x1, x2, y):
    _gcd = gcd(x1, x2)
    if _gcd == 1:
        return 1
    # All non-trivial factors of the gcd (can improve efficiency here,
    # e.g. only go up to sqrt(gcd) and pair the factors).
    factors = (i for i in range(2, _gcd + 1) if _gcd % i == 0)
    r1 = float('inf')  # smallest distance from y down to a multiple of a factor
    r2 = float('inf')  # smallest distance from y up to a multiple of a factor
    for i in factors:
        r1 = min(r1, y % i)
        r2 = min(r2, i - y % i)
    # Return the multiple of a factor of the gcd that is closest to y.
    return y - r1 if r1 <= r2 else y + r2

print(f(8, 4, 3))                  # 2
print(f(16, 12, 5))                # 4
print(f(997, 53, 44))              # 1
print(f(2300 * 2, 2300 * 3, 57))   # 56
I think you can do it with a greedy algorithm: first find the GCD with one of the common algorithms and call it d (this is computable in logarithmic time), then find the factors of d. At each step, divide d by the smallest available factor (creating d'), and compare |d' - y| with |d - y|. If it is smaller, continue in this way (replacing d with d'); otherwise, multiply d' back by the smallest eliminated factor, and again compare its distance to y. A rough sketch of this idea is below.