Suppose you have a computationally expensive method, Compute(p), which returns some float, and another method, Falloff(p), which returns another float from zero to one. If you compute Falloff(p) * Compute(p), will Compute(p) still run when Falloff(p) returns zero? Or would you need to write a special case to prevent Compute(p) from running unnecessarily?
Theoretically, an optimizing compiler could determine that omitting Compute when Falloff returns zero has no effect on the program's output. However, this is hard to test: if you have Compute emit some debug output to determine whether it is running, that very side effect tells the compiler it cannot omit the call, resulting in a sort of Schrödinger's cat situation.
I know the safe solution to this problem is just to add the special case, but I'm just curious.
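The "safe solution" mentioned above can be sketched as an explicit guard. This is a minimal illustration with hypothetical stand-ins for Compute and Falloff (their bodies here are placeholders, not the questioner's actual code); note that skipping the call does subtly change semantics if Compute can return Inf or NaN, as the answer below discusses:

```cpp
#include <cassert>

// Hypothetical stand-ins: imagine Compute is expensive and Falloff is cheap.
float Compute(float p) { return p * p + 1.0f; }
float Falloff(float p) { return p < 0.0f ? 0.0f : 1.0f; }

// The explicit special case: when Falloff is zero, return zero immediately
// and never evaluate Compute(p) at all.
float Shaded(float p) {
    float f = Falloff(p);
    if (f == 0.0f) return 0.0f;  // Compute(p) is skipped here
    return f * Compute(p);
}
```

The guard guarantees the skip regardless of what the optimizer can prove, at the cost of one branch per call.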
Generally speaking, a compiler must assume that a function call could have side effects (infinite loops, exceptions, I/O, etc.) and will not optimize it out. On the other hand, whole-program optimizers can sometimes prove that a function has no side effects and omit the call when its return value is unused.
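As an illustration, GCC and Clang let you make that "no side effects" promise yourself via a function attribute, which licenses the optimizer to elide or merge calls whose results are unused. This is a compiler-specific sketch, not standard C++, and the function body is a hypothetical stand-in:

```cpp
// Compiler-specific (GCC/Clang): __attribute__((const)) promises that the
// function's result depends only on its arguments and that it has no side
// effects. The optimizer may then delete a call whose result is discarded,
// without having to prove those properties itself.
__attribute__((const))
float Compute(float p) {
    return p * p + 1.0f;  // hypothetical expensive computation
}
```

If the promise is false (the function actually does I/O or can loop forever), the behavior is undefined, so the attribute shifts the proof burden from the compiler to the programmer.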
Note that if your function returns an IEEE float that is multiplied by 0, the call cannot be safely omitted unless the compiler can determine that the function always returns a finite real number. If it can return Inf or NaN, the multiplication by 0 is not a no-op and must be performed.
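A small check makes this concrete: under IEEE 754, 0 * Inf and 0 * NaN both produce NaN, so replacing Falloff(p) * Compute(p) with 0 would change the observable result whenever Compute yields a non-finite value. The helper name here is illustrative:

```cpp
#include <cmath>
#include <limits>

// Demonstrates that multiplying by zero is not a no-op in IEEE arithmetic:
// 0 * Inf yields NaN rather than 0, which is why the compiler cannot fold
// the product to zero without proving the other operand is always finite.
inline bool ZeroTimesInfIsNaN() {
    float inf = std::numeric_limits<float>::infinity();
    return std::isnan(0.0f * inf);
}
```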