 

Practical limitations on amount of constexpr computation

As an experiment, I just put together some code to generate a std::array<uint32_t, 256> at compile time. The table contents themselves are a fairly typical CRC lookup table - about the only new thing is the use of constexpr functions to calculate the entries as opposed to putting an autogenerated magic table directly in the source code.

Anyway, this exercise got me curious: would there be any practical limitations on the amount of computation a compiler would be willing to do to evaluate a constexpr function or variable definition at compile time? e.g. something like gcc's -ftemplate-depth parameter creating practical limits on the amount of template metaprogramming evaluation. (I also wonder if there might be practical limitations on the length of a parameter pack - which would limit the size of a compile-time std::array created using a std::integer_sequence intermediate object.)
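For reference, here is a minimal sketch of the kind of table generation described above (the function and table names are illustrative, not the asker's actual code): a constexpr function computes each CRC-32 entry, and a std::index_sequence pack expansion produces all 256 entries at compile time.

```cpp
#include <array>
#include <cstdint>
#include <utility>

// Compute one entry of the standard CRC-32 (polynomial 0xEDB88320) table.
constexpr std::uint32_t crc32_entry(std::uint32_t i) {
    std::uint32_t c = i;
    for (int k = 0; k < 8; ++k)
        c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
    return c;
}

// Expand an index_sequence of 0..255 into the full table.
template <std::size_t... Is>
constexpr std::array<std::uint32_t, sizeof...(Is)>
make_crc_table(std::index_sequence<Is...>) {
    return {{ crc32_entry(static_cast<std::uint32_t>(Is))... }};
}

constexpr auto crc_table = make_crc_table(std::make_index_sequence<256>{});

// The whole computation happens at compile time, so it can be checked
// with static_assert against known CRC-32 table values.
static_assert(crc_table[0] == 0x00000000u, "");
static_assert(crc_table[1] == 0x77073096u, "");
```

Note that the parameter pack here has 256 elements, which is the second limit the question asks about.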

asked Oct 19 '22 06:10 by Daniel Schepler

1 Answer

Recommendations for such can be found in [implimits] ¶2:

(2.35)   —   Recursive constexpr function invocations [512]

(2.36)   —   Full-expressions evaluated within a core constant expression [1 048 576]

GCC and Clang allow adjustment via -fconstexpr-depth (which is the flag you were looking for).
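As a sketch of where the recursion limit bites (exact diagnostics vary by compiler), a constexpr function whose recursion depth tracks its argument stays within the default limit of 512 for small inputs, and exceeds it for larger ones unless -fconstexpr-depth is raised:

```cpp
#include <cstdint>

// Recursion depth equals n + 1, so n controls how close we get
// to the implementation's constexpr recursion limit.
constexpr std::uint64_t sum_to(std::uint64_t n) {
    return n == 0 ? 0 : n + sum_to(n - 1);
}

// Fine: depth ~101 is well under the default limit of 512.
static_assert(sum_to(100) == 5050, "");

// Uncommenting the next line typically fails with the default limit
// (depth ~601 > 512); compiling with -fconstexpr-depth=1024 allows it.
// static_assert(sum_to(600) == 180300, "");
```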

Constant expression evaluation practically runs in a sandbox, because undefined behavior must be preempted by the implementation. With that in mind, I don't see why the implementation couldn't use the entire resources of the host machine. Then again, I wouldn't recommend writing programs whose compilation requires gigabytes of memory or other unreasonable resources...

answered Oct 21 '22 05:10 by Columbo