 

How can the compile-time be (exponentially) faster than run-time?

The below code calculates Fibonacci numbers by an exponentially slow algorithm:

#include <cstdlib>
#include <iostream>

#define DEBUG(var) { std::cout << #var << ": " << (var) << std::endl; }

constexpr auto fib(const size_t n) -> long long {
    return n < 2 ? 1 : fib(n - 1) + fib(n - 2);
}

int main(int argc, char *argv[]) {
    const long long fib91 = fib(91);

    DEBUG( fib91 );
    DEBUG( fib(45) );

    return EXIT_SUCCESS;
}

And I am calculating the 45th Fibonacci number at run-time, and the 91st one at compile time.

The interesting fact is that GCC 4.9 compiles the code and computes fib91 in a fraction of a second, but it takes a while to spit out fib(45).

My question: If GCC is smart enough to optimize the fib(91) computation and not take the exponentially slow path, what stops it from doing the same for fib(45)?

Does the above mean GCC produces two compiled versions of the fib function, one fast and the other exponentially slow?

The question is not how the compiler optimizes the fib(91) calculation (yes! It does use a sort of memoization), but: if it knows how to optimize the fib function, why does it not do the same for fib(45)? And are there two separate compilations of the fib function, one slow and the other fast?

behzad.nouri asked Jul 11 '14 at 23:07

People also ask

Why is compile time faster?

The compiler internally uses optimized code, and it effectively converts the naive recursive approach into an iterative one. Calculating those values then takes less time than the compiler's own overhead. That is the real reason.
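For illustration, an iterative version of the same computation runs in linear time. This is only a sketch of the idea; the compiler does not literally rewrite your source this way, and the function name here is made up:

#include <cstddef>

// Illustration only: a linear-time, iterative Fibonacci using the same
// convention as the question's code (fib(0) == fib(1) == 1).
long long fib_iterative(std::size_t n) {
    long long a = 1, b = 1;               // fib(0) and fib(1)
    for (std::size_t i = 0; i < n; ++i) {
        const long long next = a + b;
        a = b;
        b = next;
    }
    return a;
}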

What is difference between compile time and run time?

Compile time is the period when programming code (such as C#, Java, C, or Python) is converted into machine code (i.e. binary code). Runtime is the period when the program is running, and it generally occurs after compile time.

Is Constexpr faster?

Try it on your code base. Not only will your code be faster and smaller, it'll be safer. Code marked constexpr can't bit-rot as easily.

What occurs at compile time link time and run time?

Compile time and run time are two programming terms used in software development. Compile time is when the source code is converted into executable code, while run time is when that executable code actually runs.


1 Answer

GCC is likely memoizing constexpr functions (enabling a Θ(n) computation of fib(n)). That is safe for the compiler to do because constexpr functions are purely functional.

Compare the Θ(n) "compiler algorithm" (using memoization) to your Θ(φⁿ) run-time algorithm (where φ is the golden ratio), and suddenly it makes perfect sense that the compiler is so much faster.
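As a rough sketch of what memoization looks like (this only illustrates the Θ(n) idea; it is not GCC's actual internal mechanism, and the names and use of std::vector are my own):

#include <cstddef>
#include <vector>

// Sketch of a memoized, Θ(n) Fibonacci with the question's convention
// fib(0) == fib(1) == 1. Each value is computed at most once.
long long fib_memo(std::size_t n, std::vector<long long>& cache) {
    if (n < 2) return 1;
    if (cache[n] != 0) return cache[n];        // already computed
    return cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
}

long long fib_memoized(std::size_t n) {
    std::vector<long long> cache(n + 1, 0);    // 0 means "not yet computed"
    return fib_memo(n, cache);
}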

From the constexpr page on cppreference (emphasis added):

The constexpr specifier declares that it is possible to evaluate the value of the function or variable at compile time.

The constexpr specifier does not declare that it is required to evaluate the value of the function or variable at compile time. So one can only guess what heuristics GCC is using to choose whether to evaluate at compile time or run time when a compile time computation is not required by language rules. It can choose either, on a case-by-case basis, and still be correct.
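For example, a context that does require a constant expression forces compile-time evaluation; the initializer of a constexpr variable is one such context. A minimal sketch, using the fib from the question:

// The initializer of a constexpr variable must be a constant expression,
// so fib(91) here is guaranteed to be evaluated at compile time.
constexpr long long fib91 = fib(91);

// A plain const local gives the compiler a choice: it may fold fib(45)
// at compile time or emit an ordinary run-time call.
const long long fib45 = fib(45);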

If you want to force the compiler to evaluate your constexpr function at compile time, here's a simple trick that will do it.

#include <cstddef>  // for std::size_t

constexpr auto compute_fib(const size_t n) -> long long {
    return n < 2 ? n : compute_fib(n - 1) + compute_fib(n - 2);
}

template <std::size_t N>
struct fib {
    static_assert(N >= 0, "N must be nonnegative.");
    static const long long value = compute_fib(N);
};

In the rest of your code you can then access fib<45>::value or fib<91>::value with the guarantee that they'll be evaluated at compile time.
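For instance, a minimal usage sketch (assuming the compute_fib and fib definitions above are in scope):

#include <iostream>

int main() {
    // Both values are compile-time constants; no run-time recursion happens here.
    constexpr long long f45 = fib<45>::value;
    constexpr long long f91 = fib<91>::value;
    std::cout << "fib<45>: " << f45 << '\n';
    std::cout << "fib<91>: " << f91 << '\n';
    return 0;
}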

Timothy Shields answered Sep 30 '22 at 20:09