I have an algorithm that works with extremely large numbers, on the order of 2 raised to the power of 4,500,000. I use the BigInteger class in .NET 4 to handle these numbers.
The algorithm itself is very simple: a single loop that reduces a large initial number according to some predefined criteria. With each iteration the exponent is reduced by around 10, so 4,500,000 would become roughly 4,499,990 in the next iteration.
I am currently getting 5.16 iterations per second, or 0.193798 seconds per iteration. Based on that, the total time for the algorithm should be roughly 22 hours to bring the exponent value down to 0.
The problem is that as the number shrinks, the time required to process it in memory shrinks as well. Once the exponent drops into the 200,000 range, the iterations per second become huge and the reduction per iteration also increases exponentially.
Instead of letting the algorithm run for a whole day, is there a mathematical way to estimate how long it will take, given an initial starting number and a measured iterations-per-second rate?
That would be very helpful, since it would let me measure the effect of optimization attempts quickly.
Consider the following pseudocode:
using System;
using System.Numerics;

double e = 4500000; // 4,500,000.
Random r = new Random();
while (e > 0)
{
    BigInteger b = BigInteger.Pow(2, (int)e);   // build 2^e (the expensive step); BigInteger.Pow takes an int exponent
    double log = BigInteger.Log(b, 10);         // base-10 logarithm of that huge number
    e -= Math.Abs(log * r.Next(1, 10));         // shrink the exponent by a random multiple of log10(2^e)
}
First rewrite
double log = BigInteger.Log(b, 10);
as
double log = Math.Log10(2) * e; // log10(2^e) = e * log10(2), approximately 0.301 * e
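As a quick sanity check of that identity, you can compare the two expressions for a small exponent of your choosing; this is just an illustrative sketch, not part of the original code:

using System;
using System.Numerics;

// Check: log10(2^e) equals e * log10(2), so the BigInteger never needs to be built for the log.
int e = 1000;
double viaBigInteger = BigInteger.Log(BigInteger.Pow(2, e), 10);  // expensive: materializes 2^1000
double viaFormula = e * Math.Log10(2);                            // cheap closed form, ~0.301 * e
Console.WriteLine(viaBigInteger + " vs " + viaFormula);           // both print approximately 301.03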
Then notice that the algorithm terminates after O(1) iterations: the step removes about 0.3 * e * k with k = r.Next(1, 10), so the exponent drops below zero whenever k is 4 or more, which happens with probability 6/9 (roughly a 70% termination chance on each iteration). You can therefore neglect the cost of everything apart from the first iteration.
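If you want to verify that iteration count without touching BigInteger at all, you can simulate just the exponent recurrence; the following is a rough sketch (the trial count and variable names are my own choices):

using System;

// Monte Carlo on the exponent alone: e -= e * log10(2) * k, with k uniform in 1..9.
// A million simulated runs finish almost instantly and confirm the O(1) iteration count.
Random rng = new Random();
int trials = 1000000;
long totalIterations = 0;
for (int t = 0; t < trials; t++)
{
    double e = 4500000;
    while (e > 0)
    {
        e -= e * Math.Log10(2) * rng.Next(1, 10); // same reduction rule as the loop above
        totalIterations++;
    }
}
Console.WriteLine("Average iterations per run: " + (double)totalIterations / trials); // about 1.5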
The total cost of your algorithm is therefore about one to two times the cost of computing BigInteger.Pow(2, e) for the initial exponent e. For base 2 that is a trivial bit shift; for other bases you would need square-and-multiply.
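So, to answer the original question about estimating runtime: time one full-size iteration and multiply by the expected iteration count (about 1.5, from the simulation above). A sketch of that estimator, with the Stopwatch usage and the 1.5 constant as assumptions based on the analysis here:

using System;
using System.Diagnostics;
using System.Numerics;

// Rough runtime estimate: time one full-size iteration and scale by the expected ~1.5 iterations,
// since any later iterations work on much smaller numbers.
int startExponent = 4500000;
Stopwatch sw = Stopwatch.StartNew();
BigInteger b = BigInteger.Pow(2, startExponent);  // dominant cost: building 2^e
double log = BigInteger.Log(b, 10);               // cheap by comparison
sw.Stop();
double expectedIterations = 1.5;                  // from the simulation above
Console.WriteLine("Estimated total runtime: about "
    + sw.Elapsed.TotalSeconds * expectedIterations + " seconds");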