I did some tests on the pow(exponent) method. Unfortunately, my math skills are not strong enough to handle the following problem.
I'm using this code:
BigInteger.valueOf(2).pow(var);
Results:
See? The 2,500,000 exponent is calculated almost as fast as 2,000,000, and 4,500,000 is calculated much faster than 4,000,000.
Why is that?
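For reference, a minimal harness along these lines (my sketch, not the exact test code) can produce such timings. Note that newer JDKs factor powers of two out of the base and handle them with shifts, so base 2 may no longer show the effect there:

```java
import java.math.BigInteger;

public class PowTiming {
    public static void main(String[] args) {
        // Exponents matching the ones discussed; timings vary by machine and JDK.
        int[] exponents = {2_000_000, 2_500_000, 4_000_000, 4_500_000};
        for (int e : exponents) {
            long start = System.nanoTime();
            BigInteger r = BigInteger.valueOf(2).pow(e);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(e + ": " + ms + " ms (" + r.bitLength() + " bits)");
        }
    }
}
```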
To give you some help, here's the original implementation of BigInteger.pow(exponent):
public BigInteger pow(int exponent) {
    if (exponent < 0)
        throw new ArithmeticException("Negative exponent");
    if (signum==0)
        return (exponent==0 ? ONE : this);

    // Perform exponentiation using repeated squaring trick
    int newSign = (signum<0 && (exponent&1)==1 ? -1 : 1);
    int[] baseToPow2 = this.mag;
    int[] result = {1};

    while (exponent != 0) {
        if ((exponent & 1)==1) {
            result = multiplyToLen(result, result.length,
                                   baseToPow2, baseToPow2.length, null);
            result = trustedStripLeadingZeroInts(result);
        }

        if ((exponent >>>= 1) != 0) {
            baseToPow2 = squareToLen(baseToPow2, baseToPow2.length, null);
            baseToPow2 = trustedStripLeadingZeroInts(baseToPow2);
        }
    }

    return new BigInteger(result, newSign);
}
The algorithm uses repeated squaring (squareToLen) and multiplication (multiplyToLen). The time for these operations to run depends on the size of the numbers involved. The multiplications of the large numbers near the end of the calculation are much more expensive than those at the start.
The multiplication is only done when this condition is true: ((exponent & 1)==1). The number of square operations depends on the number of bits in the number (excluding leading zeros), but a multiplication is only required for the bits that are set to 1. It is easier to see the operations that are required by looking at the binary representation of the number:
2000000: 0000111101000010010000000
2500000: 0001001100010010110100000
3000000: 0001011011100011011000000
3500000: 0001101010110011111100000
4000000: 0001111010000100100000000
4500000: 0010001001010101000100000
5000000: 0010011000100101101000000
Note that 2.5M and 4.5M are lucky in that they have fewer high bits set than the numbers surrounding them. The next time this happens is at 8.5M:
8000000: 0011110100001001000000000
8500000: 0100000011011001100100000
9000000: 0100010010101010001000000
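The operation counts can be read off directly with the JDK's bit-twiddling helpers (a small sketch of mine, not from the original post):

```java
public class OpCount {
    // For the square-and-multiply loop in BigInteger.pow:
    //   squarings       ~ bitLength(exponent) - 1  (one per loop pass after the first)
    //   multiplications =  bitCount(exponent)      (one per set bit)
    static int squarings(int exponent) {
        return 32 - Integer.numberOfLeadingZeros(exponent) - 1;
    }

    static int multiplications(int exponent) {
        return Integer.bitCount(exponent);
    }

    public static void main(String[] args) {
        for (int e : new int[]{2_000_000, 2_500_000, 4_000_000, 4_500_000}) {
            System.out.println(e + ": " + squarings(e) + " squarings, "
                    + multiplications(e) + " multiplications");
        }
    }
}
```

The raw counts are close for neighbouring exponents; what differs is where the set bits sit. A multiplication triggered by a high-order bit multiplies by an enormous baseToPow2, so exponents whose high bits are sparse finish sooner.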
The sweet spots are exact powers of 2.
1048575: 0000011111111111111111111 // 16408 ms
1048576: 0000100000000000000000000 // 6209 ms
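The contrast between the two is easy to check (again a sketch of mine): 2^20 - 1 has twenty set bits, so every loop pass also multiplies, while 2^20 has a single set bit, so the loop does almost nothing but squarings.

```java
public class SweetSpot {
    public static void main(String[] args) {
        for (int e : new int[]{1_048_575, 1_048_576}) {
            int bitLength = 32 - Integer.numberOfLeadingZeros(e);
            System.out.println(e + ": bitLength=" + bitLength
                    + ", setBits=" + Integer.bitCount(e));
        }
        // 1048575 = 2^20 - 1 -> 20 set bits; 1048576 = 2^20 -> 1 set bit.
    }
}
```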
Just a guess:
The exponent is handled bit by bit, and if the least significant bit is 1, additional work is done.
If L is the number of bits in the exponent, A is the number of bits that are 1, t1 is the time to process the common part, and t2 is the additional processing time when the least significant bit is 1, then the run time would be

L*t1 + A*t2

i.e. the time depends on the number of 1s in the binary representation of the exponent.
Now writing a little program to verify my theory...
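A sketch of such a program (mine, not the answerer's): time pow for two exponents of the same bit length L but very different popcounts A, and see whether the dense one is slower as L*t1 + A*t2 predicts. The exponents are scaled down from the question's millions so the run stays short, and base 3 is used because newer JDKs special-case power-of-two bases:

```java
import java.math.BigInteger;

public class VerifyTheory {
    static long timePowMs(BigInteger base, int exponent) {
        long start = System.nanoTime();
        base.pow(exponent);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        BigInteger base = BigInteger.valueOf(3);
        int sparse = 1 << 17;       // bit length 18, one set bit
        int dense = (1 << 18) - 1;  // bit length 18, eighteen set bits
        System.out.println("sparse (" + Integer.bitCount(sparse) + " set bits): "
                + timePowMs(base, sparse) + " ms");
        System.out.println("dense  (" + Integer.bitCount(dense) + " set bits): "
                + timePowMs(base, dense) + " ms");
    }
}
```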