
Double-Compilation of C Code to Decrease Execution Times

According to this article/video:

GCC recognizes several architecture-dependent optimizations for increasing the speed of C executables. Passing an executable through GCC a second time can often provide a dramatic performance improvement.

If you watch the video at the link, you can see this method doubles the speed of an executable. I am not sure whether this is general.

My question is: Why does this happen?

Bonus: What exactly happens when we recompile a compiled program?

asked Jul 25 '14 by AK_


1 Answer

This is a hoax.

Neither gcc nor any other compiler is capable of reading object code, “compiling” it, and producing object code that is faster.

The closest thing is feedback-directed compilation, where you first compile a program with instrumentation (e.g. gcc -fprofile-generate), run that program to generate a data file about the run (e.g. foo.gcda), and then compile the program again using the same source code and the data file as input to the compiler (e.g. gcc -fprofile-use). This can result in fairly modest speed-ups, typically between 5% and 10% in my experience.
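As a minimal sketch of that workflow (assuming a single source file foo.c and a representative input file; both names are placeholders):

    # 1. Build with instrumentation
    gcc -O2 -fprofile-generate -o foo foo.c

    # 2. Run a representative workload; this writes the profile data (foo.gcda)
    ./foo typical-input.txt

    # 3. Rebuild the same source, now guided by the recorded profile
    gcc -O2 -fprofile-use -o foo foo.c

Note that both compilations start from the source code; the only thing the second pass consumes from the first is the profile data, never the object code.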

Suppose that you have a long chain of 50 if … else if constructs (one that isn't amenable to being restructured as a switch). This often occurs in Monte Carlo simulations, for example. If you're a reasonably experienced programmer, you'll probably order these so that the branch taken most often appears first, so that at runtime you don't waste time considering dozens of less likely branches before the most likely one. More generally, you will try to order the branches from most likely to least likely, so that, on average, the fewest branch tests are executed before the right one is found.
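For illustration, a hand-made example (the event codes and the probabilities in the comments are invented):

    #include <stdio.h>

    /* Dispatch on an event code, where code 0 is assumed to be by far the
     * most common. Testing it first means the typical call is resolved after
     * a single comparison instead of walking the whole chain. */
    static const char *describe(int event)
    {
        if (event == 0)          /* assumed ~90% of calls: test it first */
            return "common case";
        else if (event == 1)     /* next most likely */
            return "occasional case";
        else if (event == 2)
            return "rare case";
        else                     /* everything else, least likely */
            return "exotic case";
    }

    int main(void)
    {
        printf("%s\n", describe(0));
        return 0;
    }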

Note that the compiler has no basis for ordering these branches because the information that one is more likely than another simply isn't in the source code, so the best that can be done is to output the branches in source order.

With classical feedback-directed compilation, you first create an instrumented version of the executable that (when you run it) records how many times each branch is taken (or not) to a data file. The second time you compile, the compiler has empirical data from runtime (that it doesn't normally have) that can be used to reorder tests and insert branch hints that will make the code run faster… at least with workloads similar to the profiled test program.
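As one manual analogue of such a hint (the macros and function below are purely illustrative), GCC's __builtin_expect lets you hand the compiler the same kind of likelihood information yourself, which is roughly what -fprofile-use derives automatically from the .gcda data:

    /* likely()/unlikely() are the usual wrappers around GCC's __builtin_expect.
     * Here we tell the compiler that err is almost always zero, so it can keep
     * the hot path as straight-line fall-through code and move the error path
     * out of line. */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int process(int err)
    {
        if (unlikely(err != 0))
            return -1;      /* cold path: rarely taken */
        return 0;           /* hot path */
    }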

I'm sure that modern feedback-directed compilation is considerably more sophisticated, but this is the general idea.

answered Oct 05 '22 by Emmet