Can somebody explain what is going on here with this small program?
#include <stdio.h>

int main()
{
    float a = 0.577;
    float b = 0.921;
    float c;
    int i;

    for (i = 0; i < 100000000; i += 1) {
        c = 0.7 * a - 0.2 * b;
        //a = 0.145 * c + 2.7 * b;
    }
    printf("%.3f\n", c);
}
Note that one line is commented out.
I compiled it first without that line and then with it (using gcc -O2 ...), and measured the processing time. I was very surprised to find that the execution time was 0.001 s versus 2.444 s. That doesn't make much sense to me; or rather, there must be some logic behind it.
Can you please explain what is going on and how to mitigate this problem?
I am working on a program that processes a huge amount of data, and it seems I have run into the very same performance problem there.
I was considering switching from floats to integers, but it behaves the same way with integers.
EDIT: In the end the solution was trivial and logical, so thank you for all the answers and explanations!
In the first instance the calculated value is constant. The compiler can evaluate c = 0.7 * 0.577 - 0.2 * 0.921 at compile time. It is even free to optimize the loop away entirely, since nothing changes within it (a, b and c are all invariant).
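For illustration, this is roughly what the optimizer is allowed to reduce the first version to. This is only a sketch of the effect of -O2, not the literal compiler output:

#include <stdio.h>

int main()
{
    /* 0.7 * 0.577 - 0.2 * 0.921 is folded at compile time; the loop is
     * gone because no iteration changes anything the program can observe. */
    float c = 0.7 * 0.577 - 0.2 * 0.921;
    printf("%.3f\n", c);
    return 0;
}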
In the second instance, a and c vary on each iteration, so they have to be computed 100000000 times.
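To make the comparison reproducible, here is a minimal timing sketch (a hypothetical harness, not part of the original question) that runs both variants in one program and reports CPU time with clock() from <time.h>. Compiled with something like gcc -O2, the first loop should be folded away while the second really has to run; the exact numbers will of course differ from machine to machine:

#include <stdio.h>
#include <time.h>

int main(void)
{
    float a = 0.577f, b = 0.921f, c = 0.0f;
    int i;

    /* Variant 1: the loop body is invariant, so the optimizer can fold
     * it to a constant and drop the loop entirely. */
    clock_t t0 = clock();
    for (i = 0; i < 100000000; i += 1) {
        c = 0.7 * a - 0.2 * b;
    }
    printf("variant 1: c = %.3f, %.3f s\n", c,
           (double)(clock() - t0) / CLOCKS_PER_SEC);

    /* Variant 2: a and c change on every iteration, so the work really
     * has to be done 100000000 times. */
    a = 0.577f;
    t0 = clock();
    for (i = 0; i < 100000000; i += 1) {
        c = 0.7 * a - 0.2 * b;
        a = 0.145 * c + 2.7 * b;
    }
    printf("variant 2: c = %.3f, %.3f s\n", c,
           (double)(clock() - t0) / CLOCKS_PER_SEC);

    return 0;
}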