
Do compilers automatically optimise repeated calls to mathematical functions?

Say I had this snippet of code:

#include <cmath>
#include <cstdlib>   // for rand()
#include <iostream>

// ...

float f = rand();
std::cout << sin(f) << " " << sin(f);

Since sin is a deterministic, well-defined function of its argument, there is an easy optimisation:

float f = rand();
float sin_f = sin(f);
std::cout << sin_f << " " << sin_f;

Is this an optimisation that it's reasonable to expect a modern C++ compiler to do by itself? Or is there no way for the compiler to determine that sin(f) will always return the same value for the same value of f?

Tim MB asked Jan 24 '13



2 Answers

Compiling with g++ using its default optimization flags (no -O):

float f = rand();
40117e: e8 75 01 00 00          call   4012f8 <_rand>
401183: 89 44 24 1c             mov    %eax,0x1c(%esp)
401187: db 44 24 1c             fildl  0x1c(%esp)
40118b: d9 5c 24 2c             fstps  0x2c(%esp)
std::cout << sin(f) << " " << sin(f);
40118f: d9 44 24 2c             flds   0x2c(%esp)
401193: dd 1c 24                fstpl  (%esp)
401196: e8 65 01 00 00          call   401300 <_sin>  <----- 1st call
40119b: dd 5c 24 10             fstpl  0x10(%esp)
40119f: d9 44 24 2c             flds   0x2c(%esp)
4011a3: dd 1c 24                fstpl  (%esp)
4011a6: e8 55 01 00 00          call   401300 <_sin>  <----- 2nd call
4011ab: dd 5c 24 04             fstpl  0x4(%esp)
4011af: c7 04 24 e8 60 40 00    movl   $0x4060e8,(%esp)

Built with -O2:

float f = rand();
4011af: e8 24 01 00 00          call   4012d8 <_rand>
4011b4: 89 44 24 1c             mov    %eax,0x1c(%esp)
4011b8: db 44 24 1c             fildl  0x1c(%esp)
std::cout << sin(f) << " " << sin(f);
4011bc: dd 1c 24                fstpl  (%esp)
4011bf: e8 1c 01 00 00          call   4012e0 <_sin>  <----- 1 call

From this we can see that without optimizations the compiler emits two calls to sin, but with -O2 it emits only one. Empirically, then, this compiler does optimize away the repeated call.

imreal answered Nov 15 '22

I'm fairly certain GCC marks sin with the non-standard pure attribute, i.e. __attribute__((pure)).

This has the following effect:

Many functions have no effects except the return value and their return value depends only on the parameters and/or global variables. Such a function can be subject to common subexpression elimination and loop optimization just as an arithmetic operator would be.

http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html

And so there is a very good chance that such pure calls will be optimized with common subexpression elimination.

(Update: actually <cmath> is using constexpr, which implies the same optimizations.)

Pubby answered Nov 15 '22