Compiler optimizations: Where/how can I get a feel for what the payoff is for different optimizations?

In my independent study of various compiler books and web sites, I am learning about many different ways that a compiler can optimize the code it is compiling, but I am having trouble figuring out how much benefit each optimization tends to give.

How do most compiler writers decide which optimizations to implement first? Or which optimizations are worth the effort and which are not? I realize that this will vary between types of code and even individual programs, but I'm hoping there is enough similarity between most programs to say, for instance, that one given technique will usually give you a better performance gain than another.

Andru Luvisi asked Nov 04 '08


People also ask

What are some of the optimizations that can be performed by a compiler?

Typical interprocedural optimizations are: procedure inlining, interprocedural dead-code elimination, interprocedural constant propagation, and procedure reordering. As usual, the compiler needs to perform interprocedural analysis before its actual optimizations.
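
To make the first of those concrete, here is a hand-worked C sketch (not from the original post; the function names are invented) of what procedure inlining plus interprocedural constant propagation can accomplish together:

    /* A hand-worked sketch (names invented for illustration) of
     * procedure inlining plus interprocedural constant propagation. */
    #include <stdio.h>

    static int area(int side) { return side * side; }

    int main(void) {
        /* After inlining, the compiler sees 1 * 1; constant propagation
         * folds the call down to the literal 1, and interprocedural
         * dead-code elimination can then discard area() entirely if
         * nothing else calls it. */
        printf("%d\n", area(1));
        return 0;
    }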

Why is compiler optimization important?

Optimizing compilers are a mainstay of modern software: allowing a programmer to write code in a language that makes sense to them, while transforming it into a form that makes sense for the underlying hardware to run efficiently.

What are optimization flags?

Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program. The compiler performs optimization based on the knowledge it has of the program.
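
As a concrete illustration, compiling the same file at different optimization levels shows the tradeoff. The commands below assume GCC or Clang, and the file name is made up:

    /* flags.c -- compile two ways and compare the generated assembly:
     *     cc -O0 -S flags.c   (the loop survives as written)
     *     cc -O2 -S flags.c   (the loop is typically folded to 5050) */
    #include <stdio.h>

    int main(void) {
        long sum = 0;
        for (long i = 1; i <= 100; i++)
            sum += i;
        printf("%ld\n", sum);
        return 0;
    }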


3 Answers

When implementing textbook compiler optimizations, I found that some of them tended to reverse the improvements made by others. This entailed a lot of work to find the right balance between them.

So there really isn't a good answer to your question. Everything is a tradeoff. Many optimizations work well on one type of code, but are pessimizations for other types. It's like designing a house - if you make the kitchen bigger, the pantry gets smaller.

The real work in building an optimizer is trying out the various combinations, benchmarking the results, and, like a master chef, picking the right mix of ingredients.

Walter Bright answered Oct 28 '22


Tongue in cheek:

  1. Hubris
  2. Benchmarks
  3. Embarrassment

More seriously, it depends on your compiler's architecture and goals. Here's one person's experience...

Go for the "big payoffs":

  • native code generation
  • register allocation
  • instruction scheduling

Go for the remaining "low hanging fruit" (see the sketch after this list):

  • strength reduction
  • constant propagation
  • copy propagation
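
A hand-worked before/after sketch of those three transformations at the source level (the optimizer does this on its intermediate representation; all names here are invented):

    #include <stdio.h>

    unsigned before(unsigned x) {
        unsigned k = 4;      /* constant propagation: uses of k become 4 */
        unsigned y = x;      /* copy propagation: uses of y become x     */
        return y * k;        /* strength reduction: x * 4 becomes x << 2 */
    }

    /* What the optimizer effectively produces: */
    unsigned after(unsigned x) {
        return x << 2;
    }

    int main(void) {
        printf("%u %u\n", before(10), after(10));   /* both print 40 */
        return 0;
    }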

Keep benchmarking.

Look at the output; fix anything that looks stupid.

It is usually the case that combining optimizations, or even repeating optimization passes, is more effective than you might expect. The benefit is more than the sum of the parts.
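
For instance (a made-up sketch, not from the answer), constant propagation alone can expose dead code that a later pass then removes:

    #include <stdio.h>

    static int debug_level = 0;   /* never written, so effectively constant */

    int work(int x) {
        if (debug_level > 0) {    /* propagation folds this to if (0)...    */
            fprintf(stderr, "x = %d\n", x);   /* ...making this arm dead    */
        }
        return x + 1;             /* the shrunken function is now a better
                                     inlining candidate, too                */
    }

    int main(void) {
        printf("%d\n", work(41));
        return 0;
    }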

You may find that introducing one optimization necessitates another. For example, SSA with Briggs-Chaitin register allocation really benefits from copy propagation.
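
A source-level sketch of that interaction (the real transformation happens on the SSA form, and the identifiers are invented): deleting a copy merges two live ranges, so the register allocator has one fewer node competing in its interference graph.

    #include <stdio.h>

    unsigned f_before(unsigned a) {
        unsigned b = a;          /* the copy keeps both a and b live */
        return b * b + a;
    }

    unsigned f_after(unsigned a) {
        return a * a + a;        /* copy propagated: one live range  */
    }

    int main(void) {
        printf("%u %u\n", f_before(3), f_after(3));   /* both print 12 */
        return 0;
    }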

Doug Currie answered Oct 28 '22


Historically, there are "algorithmic" optimizations from which the code should benefit in most cases, like loop unrolling (and compiler writers should implement those well-documented and well-tested optimizations first).
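
Loop unrolling, shown by hand below (compilers apply it automatically at higher optimization levels; the names and the factor of 4 are illustrative):

    #include <stdio.h>

    #define N 1024   /* assumed to be a multiple of 4 for this sketch */

    long sum_rolled(const int *a) {
        long s = 0;
        for (int i = 0; i < N; i++)
            s += a[i];
        return s;
    }

    long sum_unrolled(const int *a) {
        long s = 0;
        for (int i = 0; i < N; i += 4) {   /* 4x fewer compares/branches */
            s += a[i];
            s += a[i + 1];
            s += a[i + 2];
            s += a[i + 3];
        }
        return s;
    }

    int main(void) {
        static int a[N];
        for (int i = 0; i < N; i++) a[i] = i;
        printf("%ld %ld\n", sum_rolled(a), sum_unrolled(a));
        return 0;
    }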

Then there are optimizations that depend on the target processor (like using SIMD instructions on modern CPUs).
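
For example, a loop shaped like the one below is a typical candidate for auto-vectorization into SIMD instructions (assuming GCC or Clang with something like -O3 -march=native; the function name is invented):

    #include <stddef.h>
    #include <stdio.h>

    void axpy(float *restrict y, const float *restrict x, float a, size_t n) {
        /* Independent iterations, unit stride, no aliasing (restrict):
         * the shape the vectorizer wants for SSE/AVX/NEON code. */
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        float x[8], y[8];
        for (int i = 0; i < 8; i++) { x[i] = (float)i; y[i] = 1.0f; }
        axpy(y, x, 2.0f, 8);
        printf("%g\n", (double)y[7]);   /* 2*7 + 1 = 15 */
        return 0;
    }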

See Compiler Optimizations on Wikipedia for a reference.

Finally, the various types of optimizations can be evaluated by profiling the code or by accurately timing repeated executions.
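
A minimal timing harness of the kind suggested here (work_under_test is a stand-in for whatever you are measuring):

    #include <stdio.h>
    #include <time.h>

    static volatile long sink;   /* keeps the optimizer from deleting the work */

    static long work_under_test(void) {
        long s = 0;
        for (long i = 0; i < 1000000; i++)
            s += i;
        return s;
    }

    int main(void) {
        const int reps = 100;
        clock_t t0 = clock();
        for (int r = 0; r < reps; r++)
            sink = work_under_test();
        clock_t t1 = clock();
        printf("avg: %.3f ms per run\n",
               1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC / reps);
        return 0;
    }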

friol answered Oct 27 '22