I am comparing two implementations of gradient descent; my guess was that after compiler optimization both versions of the algorithm would be equivalent.
To my surprise, the recursive version was significantly faster. I haven't ruled out an actual defect in either version, or even in the way I am measuring the time. Can you give me some insights, please?
This is my code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <stdint.h>
double f(double x)
{
    // gradient followed by the descent: 2*x is the derivative of x^2
    return 2*x;
}
double descgrad(double xo, double xnew, double eps, double precision)
{
    // printf("step ... x:%f Xp:%f, delta:%f\n",xo,xnew,fabs(xnew - xo));
    if (fabs(xnew - xo) < precision)
    {
        return xnew;
    }
    else
    {
        // tail call: the return is needed to propagate the result back up
        return descgrad(xnew, xnew - eps*f(xnew), eps, precision);
    }
}
double descgraditer(double xo, double xnew, double eps, double precision)
{
    double Xo = xo;
    double Xn = xnew;
    while(fabs(Xn-Xo) > precision)
    {
        //printf("step ... x:%f Xp:%f, delta:%f\n",Xo,Xn,fabs(Xn - Xo));
        Xo = Xn;
        Xn = Xo - eps * f(Xo);
    }
    return Xn;
}
int64_t timespecDiff(struct timespec *timeA_p, struct timespec *timeB_p)
{
    return ((timeA_p->tv_sec * 1000000000) + timeA_p->tv_nsec) -
           ((timeB_p->tv_sec * 1000000000) + timeB_p->tv_nsec);
}
int main()
{
    struct timespec s1, e1, s2, e2;

    clock_gettime(CLOCK_MONOTONIC, &s1);
    printf("Minimum : %f\n",descgraditer(100,99,0.01,0.00001));
    clock_gettime(CLOCK_MONOTONIC, &e1);

    clock_gettime(CLOCK_MONOTONIC, &s2);
    printf("Minimum : %f\n",descgrad(100,99,0.01,0.00001));
    clock_gettime(CLOCK_MONOTONIC, &e2);

    uint64_t dif1 = timespecDiff(&e1,&s1) / 1000;
    uint64_t dif2 = timespecDiff(&e2,&s2) / 1000;

    printf("time_iter:%llu ms, time_rec:%llu ms, ratio (dif1/dif2) :%g\n", dif1, dif2, (double)dif1/(double)dif2);
    printf("End. \n");
}
I am compiling with gcc 4.5.2 on Ubuntu 11.04, using the following command: gcc grad.c -O3 -lrt -o dg
The output of my code is:
Minimum : 0.000487
Minimum : 0.000487
time_iter:127 ms, time_rec:19 ms, ratio (dif1/dif2) :6.68421
End.
I read a thread that also asked about a recursive version of an algorithm being faster than the iterative one. The explanation there was that the recursive version used the stack while the other version used some vectors on the heap, and the heap accesses were slowing the iterative version down. But in this case (to the best of my understanding) I am only using the stack in both versions.
Am I missing something? Anything obvious that I am not seeing? Is my way of measuring time wrong? Any insights?
EDIT: Mystery solved in a comment. As @TonyK said, the initialization of printf was slowing down the first execution. Sorry I missed something so obvious.
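For the record, here is a minimal sketch of that explanation (mine, not from the comment thread): issuing one throwaway printf before the first clock_gettime keeps stdio's one-time setup cost out of the measured region, so whichever version happens to be timed first is no longer penalized. It builds with the same -lrt flag as above.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec s, e;
    long ns;

    /* Warm-up: the very first printf call pays any one-time stdio setup cost,
       so do it before starting the clock. */
    printf("warming up stdio\n");

    clock_gettime(CLOCK_MONOTONIC, &s);
    printf("Minimum : %f\n", 0.000487);   /* stand-in value; we only time printf itself */
    clock_gettime(CLOCK_MONOTONIC, &e);

    ns = (e.tv_sec - s.tv_sec) * 1000000000L + (e.tv_nsec - s.tv_nsec);
    printf("printf inside the timed region took about %ld ns\n", ns);
    return 0;
}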
BTW, the code compiles without warnings, but the return on the recursive call in descgrad is actually needed: even though the stop condition is eventually reached, falling off the end of a non-void function without returning a value is undefined behavior, and it may only appear to work because the result happens to be left where the caller expects it.
A recursive function can run much faster than an iterative one when the iterative version has to manage an explicit stack: for each item it then needs a CALL to a push helper (such as st_push) and another to a pop helper (st_pop), whereas the recursive version only has the one recursive CALL per step. Plus, accessing variables on the call stack is very fast.
In general, iteration executes a set of statements repeatedly without the overhead of function calls and without consuming extra stack memory, so it is usually faster and easier for the compiler to optimize.
Recursion carries more overhead than iteration: every call has to push a stack frame so control can return to the caller, which usually makes it slower, unless the compiler can eliminate the calls (for example by optimizing tail calls, as shown in the sketch below).
Recursion uses more memory, but it is sometimes clearer and more readable; loops generally give better runtime performance, while recursion can be better for the programmer (and their productivity).
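To connect this to the question: once the missing return is in place, the recursive call in descgrad is in tail position, and gcc at -O3 can typically turn such a tail call into a plain jump, so both gradient-descent versions may well compile down to essentially the same loop (inspecting the output of gcc -O3 -S would confirm this; treat it as an assumption here, not something verified in this thread). Here is a small stand-alone sketch of the pattern, not taken from the post:

#include <stdio.h>

/* Tail-recursive: the recursive call is the last thing the function does,
   so an optimizing compiler may replace it with a jump (no stack growth). */
static long sum_rec(long n, long acc)
{
    if (n == 0)
        return acc;
    return sum_rec(n - 1, acc + n);
}

/* The equivalent loop that the tail call can be reduced to. */
static long sum_iter(long n)
{
    long acc = 0;
    while (n > 0) {
        acc += n;
        n--;
    }
    return acc;
}

int main(void)
{
    printf("%ld %ld\n", sum_rec(10000, 0), sum_iter(10000));
    return 0;
}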
I've compiled and run your code locally. Moving the printf outside of the timed block makes both versions execute in ~5ms every time. So the central mistake in your timing is that you are measuring the complex beast that is printf, and its runtime dwarfs the code you are actually trying to measure. My main() function now looks like this:
int main() {
    struct timespec s1, e1, s2, e2;
    double d = 0.0;

    clock_gettime(CLOCK_MONOTONIC, &s1);
    d = descgraditer(100,99,0.01,0.00001);
    clock_gettime(CLOCK_MONOTONIC, &e1);
    printf("Minimum : %f\n", d);

    clock_gettime(CLOCK_MONOTONIC, &s2);
    d = descgrad(100,99,0.01,0.00001);
    clock_gettime(CLOCK_MONOTONIC, &e2);
    printf("Minimum : %f\n", d);

    uint64_t dif1 = timespecDiff(&e1,&s1) / 1000;
    uint64_t dif2 = timespecDiff(&e2,&s2) / 1000;

    printf("time_iter:%llu ms, time_rec:%llu ms, ratio (dif1/dif2) :%g\n", dif1, dif2, (double)dif1/(double)dif2);
    printf("End. \n");
}
Is my way of measuring time wrong?
Yes. In the short timespans you are measuring, the scheduler can have a massive impact on your program. You need either to make your test much longer to average such differences out, or to use CLOCK_PROCESS_CPUTIME_ID instead, which measures the CPU time used by your process.
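As a concrete illustration of that suggestion, here is a sketch of mine (not part of the original answer) that times many repetitions with CLOCK_PROCESS_CPUTIME_ID and reports the average per call. The repetition count of 100000 is an arbitrary choice, and f and descgraditer are copied from the question so the file compiles on its own (with the same -O3 -lrt flags). The same loop can be duplicated for descgrad to compare the two versions.

#include <stdio.h>
#include <math.h>
#include <time.h>
#include <stdint.h>

/* f and descgraditer copied from the question so this sketch stands alone */
double f(double x) { return 2*x; }

double descgraditer(double xo, double xnew, double eps, double precision)
{
    double Xo = xo, Xn = xnew;
    while (fabs(Xn - Xo) > precision) {
        Xo = Xn;
        Xn = Xo - eps * f(Xo);
    }
    return Xn;
}

int main(void)
{
    struct timespec s, e;
    double d = 0.0;
    int64_t ns;
    int i;
    const int reps = 100000;   /* arbitrary: many repetitions average out scheduler noise */

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &s);
    for (i = 0; i < reps; ++i)
        d = descgraditer(100 + i * 1e-9, 99, 0.01, 0.00001);  /* vary the start slightly so the
                                                                 optimizer cannot hoist the call */
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &e);

    ns = (int64_t)(e.tv_sec - s.tv_sec) * 1000000000 + (e.tv_nsec - s.tv_nsec);
    printf("Minimum : %f, average per call: %lld ns\n", d, (long long)(ns / reps));
    return 0;
}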