
Precalculated definition of a variable vs. initializing by calculation

Suppose for example, I wanted to initialize my variables using a function:

int x[10];

/* Fill x with the first ten squares. */
void init_x(void) {
    for (int i = 0; i < 10; ++i) {
        x[i] = i * i;
    }
}

It doesn't have to be this exact function; it could be more complicated, operate on a bigger array, or use a different integer type. The point is that the result is deterministic. My question is: would it be better (i.e. would my program initialize faster every time) to calculate the result beforehand and just define the array outright?

int x[10] = {0, 1, 4, 9, 16, 25, 36, 49, 64, 81};

That way, I run the initialization function only once (i.e. run the function, copy and paste the results into the array definition, and comment the code out) rather than again and again every time I run the program. (At least that's what I assume happens.)

Are there any disadvantages to doing this?

J. K. P. C. asked Feb 07 '23

2 Answers

If I understand your question correctly, you are asking whether you can do the calculations at compile time instead of at run time, and whether there are caveats.

The answer depends on the complexity of the calculations. If they are simple (and deterministic, as you say), you can usually do this successfully. The caveats are that code doing its computations at compile time can be less than easy to read, and it can greatly increase compile times.
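For example, here is a minimal sketch of the simple case (the SQ macro name is mine, not from the question). Because the initializer of an array with static storage duration must be a constant expression, the compiler folds the arithmetic and the finished table goes straight into the binary:

/* The compiler evaluates each SQ(i) at compile time (constant folding),
 * so no initialization code runs at program start. */
#define SQ(i) ((i) * (i))

int x[10] = { SQ(0), SQ(1), SQ(2), SQ(3), SQ(4),
              SQ(5), SQ(6), SQ(7), SQ(8), SQ(9) };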

The generalization of this technique is called meta-programming, where you add one extra level of code transformation (compilation) before the usual code -> binary transformation.

You can do limited forms of this using the pre-processor. GCC also supports some expressions that are evaluated statically. Other techniques include X-Macros, which basically give you parametric templates like those in C++.
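As a hedged sketch of the X-macro idea (all names here are illustrative, not from any particular library): the list is written once and then expanded in several different ways, keeping an enum and its companion table in sync automatically.

/* Define the data once... */
#define COLOR_LIST \
    X(RED,   0xFF0000) \
    X(GREEN, 0x00FF00) \
    X(BLUE,  0x0000FF)

/* ...expand it into an enum... */
enum color {
#define X(name, value) COLOR_##name,
    COLOR_LIST
#undef X
    COLOR_COUNT
};

/* ...and expand it again into a matching table of RGB values. */
static const unsigned long color_rgb[] = {
#define X(name, value) value,
    COLOR_LIST
#undef X
};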

There are libraries (P99, for instance) that can perform Turing-complete computation at compile time using the pre-processor. The syntax is usually hairy, with many conventions and idioms to learn before you become productive.

In contrast to complex meta-programming, I've achieved greater code clarity, and more appreciation from colleagues maintaining my code, by generating code with e.g. a Perl or Python script than by hacking something together with the pre-processor.
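To sketch that approach while staying in C (the same idea applies to a Perl or Python script), here is a tiny, hypothetical generator program that the build system could compile and run to emit a header; the file and program names are mine:

/* gen_x.c -- hypothetical build-time generator, run by the build system:
 *   cc -o gen_x gen_x.c && ./gen_x > x_table.h */
#include <stdio.h>

int main(void)
{
    printf("/* Generated file -- do not edit. */\n");
    printf("int x[10] = {");
    for (int i = 0; i < 10; ++i)
        printf("%s%d", i ? ", " : " ", i * i);
    printf(" };\n");
    return 0;
}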

EDIT:

To answer your question with an example, I'll tell you that I write a lot of C code professionally for microcontrollers with 4-16 kB RAM and 16-128 kB flash code space. Most of the applications live for at least a decade and will require ongoing updates and feature additions. That means I have to take good care not to waste resources, so I'll always prefer it if something can be calculated at compile time instead of at run time. That saves code space at the cost of added complexity in the build system. If the data is constant, it also means I can place it in read-only flash memory and save precious RAM.
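A minimal sketch of that last point, assuming a typical embedded toolchain (the exact section placement depends on the linker script and target):

#include <stdint.h>

/* const data is typically placed in flash (.rodata), costing no RAM. */
static const uint16_t squares[10] = { 0, 1, 4, 9, 16, 25, 36, 49, 64, 81 };

/* A non-const table would normally land in RAM (.data), and its initial
 * values also have to be copied out of flash at startup. */
static uint16_t squares_ram[10] = { 0, 1, 4, 9, 16, 25, 36, 49, 64, 81 };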

Another example is the aes-min project, a small implementation of AES-128. I think it has a build option so that a component of the algorithm (the S-box?) is pre-calculated and put in ROM instead of RAM. Other symmetric encryption algorithms need to calculate some data from the key, and if the key is static, this pre-calculation technique can be used efficiently.
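The aes-min detail above is the answerer's recollection, so the following is only a hedged sketch of the general build-choice pattern, with hypothetical names and a toy table rather than aes-min's actual code:

#include <stdint.h>

#define TABLE_SIZE 8  /* illustrative size; the real AES S-box has 256 entries */

#ifdef USE_PRECALCULATED_TABLE
/* Build-time option: ship the table as const data, placed in ROM. */
static const uint8_t table[TABLE_SIZE] = { 0, 1, 4, 9, 16, 25, 36, 49 };
#else
/* Otherwise compute it once at startup, spending RAM and boot time. */
static uint8_t table[TABLE_SIZE];

static void init_table(void)
{
    for (int i = 0; i < TABLE_SIZE; ++i)
        table[i] = (uint8_t)(i * i);
}
#endif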

Morten Jensen answered Mar 05 '23


All else being equal, human effort is far more expensive than CPU time or disk space. Do whatever requires the least up-front and ongoing human effort. Building a complicated multi-stage build process may save a little CPU or disk, but it will cost effort.

NovaDenizen answered Mar 05 '23