Is it more efficient to do multiplication than raise to power 2 in C++?
I am trying to do final detailed optimizations. Will the compiler treat x*x the same as pow(x,2)? If I remember correctly, multiplication was better for some reason, but maybe it does not matter in C++11.
Thanks
If you're comparing multiplication with the pow() standard library function, then yes, multiplication is definitely faster.
In general, you should not worry about pico-optimizations like that unless you have evidence that there is a hot spot (i.e. unless you've profiled your code under realistic scenarios and have identified a particular chunk of code). Also keep in mind that your clever tricks may actually cause performance regressions in new processors, where your assumptions will no longer hold.
Algorithmic changes are where you will get the most bang for your computing buck. Focus on that.
Tinkering with multiplications and doing clever bit-hackery... eh, not so much bang there*, because the current generation of optimizing compilers is really quite excellent at its job. That's not to say they can't be beaten. They can, but not easily, and probably only by a few people like Agner Fog.
* there are, of course, exceptions.
When it comes to performance, always make measurements to back up your assumptions. Never trust theory unless you have a benchmark that proves that theory right.
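For example, a quick-and-dirty comparison of x * x against std::pow(x, 2.0) might look like the sketch below. The data set, loop structure and timing approach are my own assumptions; compile with optimizations enabled, and prefer a proper benchmarking framework for numbers you intend to act on.

#include <chrono>
#include <cmath>
#include <iostream>
#include <vector>

// Time one pass over the data, accumulating into a sum so the
// compiler cannot simply discard the work being measured.
template <typename F>
void time_it(const std::vector<double>& data, F f, const char* label)
{
    auto start = std::chrono::steady_clock::now();
    double sum = 0.0;
    for (double x : data)
        sum += f(x);
    auto stop = std::chrono::steady_clock::now();
    std::cout << label << ": sum = " << sum << ", "
              << std::chrono::duration<double, std::milli>(stop - start).count()
              << " ms\n";
}

int main()
{
    std::vector<double> data(1000000);
    double v = 0.0;
    for (double& d : data) { d = v; v += 0.001; }

    time_it(data, [](double x) { return x * x; }, "x * x");
    time_it(data, [](double x) { return std::pow(x, 2.0); }, "pow(x, 2.0)");
}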
Also, keep in mind that x ^ 2 does not yield the square of x in C++:
#include <iostream>

int main()
{
    int x = 4;
    std::cout << (x ^ 2); // Prints 6, because ^ is bitwise XOR, not exponentiation
}
The implementation of pow() typically involves logarithms, multiplication and exponentiation, so it will DEFINITELY take longer than a simple multiplication. Most modern high-end processors can do multiplication in a couple of clock cycles for integer values, and a dozen or so cycles for a floating point multiply. The logarithm is either done as a complex (microcoded) instruction that takes a few dozen or more cycles, or as a series of multiplications and additions (typically with alternating positive and negative terms, but not necessarily). Exponentiation is a similar process.
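As a rough way to see where that cost comes from: for positive x, pow(x, y) is mathematically exp(y * ln(x)). The sketch below is my own simplification, not how any real library implements pow(); it just puts the extra transcendental work next to a single multiplication.

#include <cmath>
#include <iostream>

// Illustrative only: x^y as exp(y * ln(x)), valid for x > 0.
// Real implementations are far more careful about accuracy and edge cases.
double naive_pow(double x, double y)
{
    return std::exp(y * std::log(x));
}

int main()
{
    double x = 3.0;
    std::cout << naive_pow(x, 2.0) << '\n'; // ~9, via log, multiply and exp
    std::cout << x * x << '\n';             // 9, via one multiplication
}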
On lower-range processors (e.g. ARM or older x86 processors), the results are even worse: a single floating point operation can take hundreds of cycles, and on some processors floating point calculations are actually a series of integer operations that perform the same steps the float instructions do on more advanced processors, so the time taken for pow() could be thousands of cycles, compared to a dozen or so for a multiplication.
Whichever choice is used, the whole calculation will be significantly longer than a simple multiplication.
The pow() function is useful when the exponent is either large, or not an integer. Even for relatively large integer exponents, you can do the calculation by squaring or cubing multiple times, and it will be faster than pow().
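One common way to do that is exponentiation by squaring, which needs only O(log n) multiplications for an integer exponent n. The sketch below is a minimal version with names of my own choosing; it ignores overflow and assumes a non-negative exponent.

#include <cstdint>
#include <iostream>

// Exponentiation by squaring: computes base^exp with O(log exp) multiplications.
// Minimal sketch: no overflow handling, exponent assumed non-negative.
double pow_by_squaring(double base, std::uint64_t exp)
{
    double result = 1.0;
    while (exp > 0) {
        if (exp & 1)      // low bit set: fold the current base into the result
            result *= base;
        base *= base;     // square the base for the next bit
        exp >>= 1;
    }
    return result;
}

int main()
{
    std::cout << pow_by_squaring(2.0, 10) << '\n'; // 1024
    std::cout << pow_by_squaring(3.0, 5) << '\n';  // 243
}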
Of course, sometimes the compiler may be able to figure out what you want to do and perform it as a sequence of multiplications as an optimization. But I wouldn't rely on that.
Finally, as ALWAYS, for performance questions: if it's really important to your code, then measure it - your compiler may be smarter than you think. If performance isn't important, then perform the calculation in the way that makes the code most readable.