
Time complexity of the program using recurrence equation

I want to find the time complexity of the following program using recurrence equations:

int g(int x);   /* forward declaration, since f and g call each other */

int f(int x)
{
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

I wrote its recurrence equation (writing g(n) for the running time of the call g(n)) and tried to solve it, but it keeps getting more and more complex:

T(n) = T(n-1) + g(n) + c
     = T(n-2) + g(n-1) + g(n) + 2c
     = T(n-3) + g(n-2) + g(n-1) + g(n) + 3c
     = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + 4c
     ...
     (after k expansions)
     = T(n-k) + g(n-k+1) + ... + g(n-1) + g(n) + kc

Suppose the argument reaches 1 after k expansions, i.e. n - k = 1, so k = n - 1. Now I end up with this:

T(n) = (n-1)c + g(n) + g(n-1) + g(n-2) + ... + g(2) + T(1)

I'm not able to solve it further. If you simply count the number of function calls in this program, it is easy to see that the time complexity is exponential, but I want to prove it using the recurrence. How can that be done?
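For a concrete sense of what I mean by counting calls, here is a quick instrumented sketch (the calls counter and the driver loop are only for illustration, they are not part of the program being analysed):

#include <stdio.h>

static long long calls;              /* total number of calls to f() and g() */

int g(int x);                        /* forward declaration: f and g are mutually recursive */

int f(int x)
{
    calls++;
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    calls++;
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

int main(void)
{
    for (int n = 1; n <= 22; n++) {
        calls = 0;
        f(n);
        printf("n=%2d  calls=%lld\n", n, calls);  /* the count roughly doubles for each +1 in n */
    }
    return 0;
}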

[diagram]

The explanation in Answer 1 looks correct; it is similar to the work I did.

The most difficult task with this code is writing its recurrence equation. I have drawn another diagram and identified some patterns; I think this diagram can help us work out what the recurrence equation could be.

For f(2): [call-tree diagram]

For f(3): [call-tree diagram]

And I came up with this equation. I am not sure whether it is right; please help.

T(n) = 2*T(n-1) + c*log n
asked Mar 24 '13 by siddstuff



2 Answers

OK, I think I have been able to prove that f(x) = Theta(2^x) (note that the running time has the same order of growth as the returned value). This also proves that g(x) = Theta(2^x), since f(x) > g(x) > f(x-1).

First, as everyone noted, it is easy to prove that f(x) = Omega(2^x).

Now we have the relation f(x) <= 2 f(x-1) + f(x/2), because f(x) = f(x-1) + g(x) = 2 f(x-1) + g(x/2) and g(x/2) <= f(x/2).

We will show that, for sufficiently large x, there is some constant K > 0 such that

f(x) <= K*H(x), where H(x) = (2 + 1/x)^x

This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x-> infinity (wolfram alpha link of the limit).
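If it helps, here is a quick numerical sanity check of that limit (just an illustrative sketch, not part of the proof); it evaluates (1 + 1/(2x))^x, which equals H(x)/2^x, to avoid overflowing doubles:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* H(x)/2^x = ((2 + 1/x)/2)^x = (1 + 1/(2x))^x, which should approach sqrt(e) */
    for (double x = 10; x <= 100000; x *= 10)
        printf("x = %8.0f   H(x)/2^x = %.6f\n", x, pow(1.0 + 1.0 / (2.0 * x), x));
    printf("sqrt(e)          = %.6f\n", sqrt(exp(1.0)));
    return 0;
}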

Now (warning: heavier math; perhaps cs.stackexchange or math.stackexchange is better suited)

according to wolfram alpha (click the link and see series expansion near x = infinity),

H(x) = exp(x ln(2) + 1/2 + O(1/x))

And again, according to wolfram alpha (click the link (different from above) and see the series expansion for x = infinity), we have that

H(x) - 2H(x-1) = [1/(8x^2) + O(1/x^3)] exp(x ln(2) + 1/2 + O(1/x))

and so

[H(x) - 2H(x-1)]/H(x/2) -> infinity as x -> infinity

Thus, for sufficiently large x (say x > L) we have the inequality

H(x) >= 2H(x-1) + H(x/2)
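As a rough numerical illustration of that claim (my own sketch, treating x as a real number rather than the integer arguments of the recursion), the slack H(x) - 2H(x-1) - H(x/2) is negative for small x and becomes positive once x is large enough:

#include <stdio.h>
#include <math.h>

static double H(double x) { return pow(2.0 + 1.0 / x, x); }   /* H(x) = (2 + 1/x)^x */

int main(void)
{
    for (int x = 10; x <= 200; x += 10) {
        double slack = H(x) - 2.0 * H(x - 1) - H(x / 2.0);
        printf("x=%3d   H(x) - 2H(x-1) - H(x/2) = %+.3e\n", x, slack);
    }
    return 0;
}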

Now there is some K (dependent only on L (for instance K = f(2L))) such that

f(x) <= K*H(x) for all x <= 2L

Now we proceed by (strong) induction (you can revert to natural numbers if you want to)

f(x+1) <= 2f(x) + f((x+1)/2)

By induction, the right side is

<= 2*K*H(x) + K*H((x+1)/2)

And we proved earlier that

2*H(x) + H((x+1)/2) <= H(x+1)

Thus f(x+1) <= K * H(x+1)

answered Oct 12 '22 by Knoothe


Using memoisation, both functions can easily be computed in O(n) time. But the program as written takes at least Omega(2^n) time, and is thus a very inefficient way of computing f(n) and g(n).
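For reference, a memoised version might look like the sketch below; the MAXN bound, the long long type and the helper names f_val/g_val are choices made for this illustration, not part of the original program:

#include <stdio.h>

#define MAXN 60                          /* f(n) grows like 2^n, so signed 64-bit overflows in the low 60s */

static long long fmemo[MAXN + 1], gmemo[MAXN + 1];

static long long g_val(int x);           /* forward declaration: the two functions are mutually recursive */

static long long f_val(int x)
{
    if (x < 1) return 1;
    if (fmemo[x] == 0)                   /* 0 means "not computed yet"; all real values are >= 1 */
        fmemo[x] = f_val(x - 1) + g_val(x);
    return fmemo[x];
}

static long long g_val(int x)
{
    if (x < 2) return 1;
    if (gmemo[x] == 0)
        gmemo[x] = f_val(x - 1) + g_val(x / 2);
    return gmemo[x];
}

int main(void)
{
    for (int n = 1; n <= MAXN; n++)
        printf("f(%d) = %lld\n", n, f_val(n));   /* each f(n), g(n) is computed once: O(n) total work */
    return 0;
}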

To prove that the program takes at most O((2+epsilon)^n) time for any epsilon > 0:

Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):

F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1

Then one can prove:

  • F and G are monotonic
  • F > G
  • Define H(1) = 4; H(n) = 2 * H(n-1) + H(n/2) + 1
  • then H > F (H(1) > F(1) = 3, and by induction, using G < F)
  • for all n, H(n) > 2 * H(n-1)
  • hence H(n/2) / H(n-1) -> 0 as n -> infinity
  • hence H(n) < (2 + epsilon) * H(n-1) for all epsilon > 0 and sufficiently large n
  • hence H in O((2 + epsilon)^n) for any epsilon > 0
  • (Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below)
  • so this is the best I can prove: because G < F < H, F and G are also in O((2 + epsilon)^n) for any epsilon > 0 (see the sketch below)
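To see these recurrences behave as claimed, here is a small sketch of my own that evaluates F, G and H from the definitions above (memoised, using doubles to avoid overflow) and prints consecutive ratios; both F(n)/F(n-1) and H(n)/H(n-1) drift down towards 2, which is the informal content of the (2 + epsilon)^n bound:

#include <stdio.h>

#define MAXN 200

static double Fm[MAXN + 1], Gm[MAXN + 1], Hm[MAXN + 1];

static double Fv(int n);                          /* forward declaration: F and G are mutually recursive */

static double Gv(int n)
{
    if (n < 2) return 1;                          /* G(1) = 1 */
    if (Gm[n] == 0) Gm[n] = Fv(n - 1) + Gv(n / 2) + 1;
    return Gm[n];
}

static double Fv(int n)
{
    if (n < 1) return 1;                          /* F(0) = 1 */
    if (Fm[n] == 0) Fm[n] = Fv(n - 1) + Gv(n) + 1;
    return Fm[n];
}

static double Hv(int n)
{
    if (n <= 1) return 4;                         /* H(1) = 4 */
    if (Hm[n] == 0) Hm[n] = 2 * Hv(n - 1) + Hv(n / 2) + 1;
    return Hm[n];
}

int main(void)
{
    for (int n = 10; n <= MAXN; n += 10)
        printf("n=%3d   F(n)/F(n-1) = %.4f   H(n)/H(n-1) = %.4f\n",
               n, Fv(n) / Fv(n - 1), Hv(n) / Hv(n - 1));
    return 0;
}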

Postscript (after seeing Mr Knoothe's solution): because IMHO a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):

For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work needed for f(n), plus something more. If this something more becomes relatively smaller with increasing n (which is often the case), using a fixed epsilon as above is not optimal. Replacing the epsilon above by some decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).

answered Oct 12 '22 by 13 revs