I have this program
// h is our N
static int g = 0;

int fun(int h) {
    if (h <= 0) {
        g++;
        return g;
    }
    return g + fun(h - 1) + fun(h - 4);
}
Is it possible to speed it up using dynamic programming?
I figured out that this function runs in O(2^n).
I am supposed to reduce the running time using dynamic programming, but I do not understand the concept.
Just asking for a nudge in the right direction.
Dynamic Programming is a powerful technique that can be used to solve many problems in time O(n^2) or O(n^3) for which a naive approach would take exponential time.
Dynamic programming is essentially recursion with memoization: you calculate and store the results of subproblems so that, when the same subproblem occurs again, you look the answer up instead of recomputing it. This makes your code faster and reduces the time complexity, since far fewer CPU cycles are spent on repeated work.
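As a loose sketch of the idea (not the questioner's exact fun(), since the global g is deliberately ignored here; the names f, memo, cached and MAXN are made up for illustration), memoizing the pure recurrence f(h) = f(h-1) + f(h-4), with f(h) = 1 for h <= 0, could look like this in C:

#include <stdio.h>

/* Hypothetical sketch of memoization, not the original fun():
   the side effect on g is left out on purpose. Each value of h
   is computed only once and then reused from the cache. */
#define MAXN 1000

static long long memo[MAXN + 1];
static int cached[MAXN + 1];

long long f(int h) {
    if (h <= 0)
        return 1;                     /* base case of the pure variant */
    if (!cached[h]) {                 /* compute each subproblem once */
        memo[h] = f(h - 1) + f(h - 4);
        cached[h] = 1;
    }
    return memo[h];
}

int main(void) {
    printf("%lld\n", f(60));          /* O(n) calls instead of exponentially many */
    return 0;
}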
Dynamic programming is an optimization approach that transforms a complex problem into a sequence of simpler problems; its essential characteristic is the multistage nature of the optimization procedure.
While I can't give an answer to your actual question, I am intrigued by something altogether different, namely the statement
return g+fun(h-1)+fun(h-4);
Obviously, your function has the side effect of changing the global static variable g. I am not 100% sure whether the return statement's expression actually evaluates in a clearly defined fashion, or whether the result might be undefined. It might be a nice exercise to think about the order in which those function calls are executed, and how this affects g and thereby the function's result.
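To illustrate what is at stake, here is a small hypothetical demo (the functions left() and right() are made up, this is not the questioner's code): C leaves the evaluation order of the two operands of + unspecified, so when the operands have side effects the result can legitimately differ between compilers.

#include <stdio.h>

/* Hypothetical demo: C does not specify which operand of '+' is
   evaluated first, so the printed sum may differ between compilers. */
static int g = 0;

int left(void)  { g++; return g * 10; }
int right(void) { g++; return g; }

int main(void) {
    /* If left() runs first: 10 + 2 = 12.  If right() runs first: 20 + 1 = 21. */
    int sum = left() + right();
    printf("%d\n", sum);
    return 0;
}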
If we define the evaluation order in g+fun(h-1)+fun(h-4) to be from left to right, then this is a well-defined problem. With that, I get the following values for fun(n), n = 1, ..., 15:
3, 6, 10, 15, 33, 74, 154, 295, 575, 1143, 2269, 4414, 8508, 16396, 31634
The return value of fun(n) is built up as a sum of non-decreasing terms. Each summand is either one larger than the previous one (the g++; return g; branch) or equal to the previous one (the return g+fun()+fun() branch). The sequence of executed return statements depends only on fun()'s input parameter. So, if g is set to an initial value != 0, we get the same summands as with g = 0, but every summand is larger by that initial value. Therefore, fun(n) with an initial g > 0 returns a value that is g times the number of executed return statements larger than with an initial g = 0. (For example, fun(1) executes three return statements and returns 3 when g starts at 0, so with an initial g = 5 it returns 3 + 5*3 = 18.)
Define A(n) as the number of return statements executed while evaluating fun(n), and G(n) as the number of return statements executed in the if clause (which is the same as the number of g++ executions). A and G satisfy:
A(n) = A(n-1) + A(n-4) + 1
G(n) = G(n-1) + G(n-4)
A(n) = 1 and G(n) = 1, for n <= 0
From these observations it can be seen that the following holds for n > 0:
fun(n) = fun(n-1) + G(n-1) * A(n-4) + fun(n-4)
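As a quick check against the values listed above: taking G(4) = 5 and A(1) = 3 from the recurrences, fun(5) = fun(4) + G(4)*A(1) + fun(1) = 15 + 5*3 + 3 = 33, which matches.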
A simple Python implementation:
def x(h):
    # Base values for n <= 0: one executed return, one g++, fun(n) == 1.
    Rg = {-3: 1, -2: 1, -1: 1, 0: 1}   # G(n): number of g++ executions
    Ra = {-3: 1, -2: 1, -1: 1, 0: 1}   # A(n): number of executed returns
    F  = {-3: 1, -2: 1, -1: 1, 0: 1}   # fun(n) with initial g = 0
    for i in range(1, h + 1):
        F[i] = F[i-1] + Rg[i-1] * Ra[i-4] + F[i-4]
        print(i, F[i])
        Rg[i] = Rg[i-1] + Rg[i-4]
        Ra[i] = Ra[i-1] + Ra[i-4] + 1
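For example, calling x(15) prints the fifteen values listed above (ending in 31634) in a single linear pass, instead of the exponentially many recursive calls made by the original fun().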
@stakx: for the expression g+fun(h-1)+fun(h-4) we have no guarantee of evaluation order, especially not in C.