I'm interested in the time cost, on a modern desktop CPU, of some floating-point operations, in order to optimize a mathematical evaluation. In particular, I'm interested in the comparison between complex operations like exp and log and simple operations like +, * and /.
I tried to search for this information, but I couldn't find a source.
What is the cost of floating point operations?
Floating-point operations are generally more expensive than integer operations because of the format itself. A float is stored as a sign, an exponent, and a mantissa, so even a simple addition requires aligning the exponents, adding the mantissas, then renormalizing and rounding the result. The two's-complement format of integers, by contrast, makes addition and subtraction extremely simple to implement in hardware.
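For illustration, here's a small C sketch (my own, not from any particular source) that unpacks the standard IEEE-754 fields of a single-precision float. These are the fields the hardware has to compare, shift, and renormalize on every addition:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Decompose an IEEE-754 single-precision float into its fields.
   Float addition needs extra hardware steps compared to integer
   addition: compare exponents, shift the smaller mantissa to align,
   add, then renormalize and round the result. */
int main(void) {
    float f = 6.5f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* safe type-pun */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;   /* biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;       /* implicit leading 1 */

    printf("%f = sign %u, exponent %u (unbiased %d), mantissa 0x%06X\n",
           f, sign, exponent, (int)exponent - 127, (unsigned)mantissa);
    return 0;
}
```

For 6.5 this prints sign 0, exponent 129 (unbiased 2) and mantissa 0x500000, i.e. 1.625 × 2².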
For example, y = x * 2 * (y + z*w) is 4 floating-point operations (one addition and three multiplications). If that statement runs inside a loop, multiply 4 by the number of iterations to get the total operation count you're searching for.
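As a concrete sketch in C (the values are made up, chosen so the recurrence stays bounded):

```c
#include <stdio.h>

int main(void) {
    const long N = 1000000;                /* number of iterations */
    double x = 0.4, y = 0.5, z = 0.3, w = 0.7;

    for (long i = 0; i < N; i++)
        y = x * 2 * (y + z * w);           /* 1 add + 3 muls = 4 FLOPs */

    /* total work: 4 FLOPs per iteration times N iterations */
    printf("y = %g after %ld iterations (~%ld FLOPs)\n", y, N, 4 * N);
    return 0;
}
```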
The floating-point processor provides high-performance execution of floating-point operations. Instructions are provided to perform arithmetic, comparison, and other operations in floating-point registers, and to move floating-point data between storage and the floating-point registers.
Specific to floating-point numbers, a floating-point operation is any mathematical operation (such as +, -, *, /) or assignment that involves floating-point operands, as opposed to binary integer operations. Floating-point numbers are the ones that represent values with fractional parts.
Modern CPUs will do float + and - in a few clocks (typically around 3-5 cycles of latency). Multiplication costs a similar small number of clocks, usually slightly more than + and -. Division is considerably slower than *, on the order of 10-20+ cycles, and is often not fully pipelined. Transcendentals like exp and log are slower still: they are usually implemented in software (libm) or microcode and can cost tens to a couple of hundred cycles each.
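The only numbers you can really trust are the ones measured on your own machine. Here is a rough microbenchmark sketch (mine, assuming a POSIX system with clock_gettime); it chains dependent operations so that latency, not throughput, dominates, and each recurrence is chosen to stay in a numerically "normal" range so libm fast paths don't skew the timing:

```c
#include <stdio.h>
#include <math.h>
#include <time.h>

#define N 10000000L   /* iterations per benchmark */

/* Each loop carries a data dependency, so the chain measures latency. */
static double bench_add(double x) { for (long i = 0; i < N; i++) x = x + 1e-9;       return x; }
static double bench_mul(double x) { for (long i = 0; i < N; i++) x = x * 1.0000001;  return x; }
static double bench_div(double x) { for (long i = 0; i < N; i++) x = x / 1.0000001;  return x; }
static double bench_exp(double x) { for (long i = 0; i < N; i++) x = exp(-x);        return x; }
static double bench_log(double x) { for (long i = 0; i < N; i++) x = log(x + 3.0);   return x; }

static void run(const char *name, double (*f)(double), double x0) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double r = f(x0);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* printing r keeps the compiler from eliminating the loop */
    printf("%-4s: %6.2f ns/op (result %g)\n", name, ns / N, r);
}

int main(void) {
    run("+",   bench_add, 0.0);
    run("*",   bench_mul, 1.0);
    run("/",   bench_div, 1.0);
    run("exp", bench_exp, 0.5);
    run("log", bench_log, 0.5);
    return 0;
}
```

Compile with something like `cc -O2 bench.c -lm`. On a typical x86-64 desktop you'd expect + and * around a nanosecond each, / a few times that, and exp/log roughly an order of magnitude slower, but the exact ratios vary by CPU, compiler, and flags, so measure rather than assume.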
You can likely get some idea of the speeds by looking in the Intel optimization manuals, which publish latency and throughput tables per instruction and microarchitecture; Agner Fog's instruction tables (agner.org/optimize) are another widely used reference.