Consider the inputs below for a typical Knapsack problem.
V = [10,8,12]
W = [2,3,7]
i = 1,2,3
C = 10
I tried recursion with memoization to solve this sample, but found no overlapping subproblems.
Signature of the recursive procedure:
knapsack(int c, int i)
Called initially as knapsack(10,1)
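A minimal Python sketch of that memoized recursion, assuming the sample arrays above and 1-based item indexing (the memo dictionary and variable names are my own):

```python
# Sample instance from the question (items indexed 1..n)
V = [10, 8, 12]   # values
W = [2, 3, 7]     # weights
n = len(V)

memo = {}

def knapsack(c, i):
    """Best total value using items i..n with remaining capacity c."""
    if i > n:
        return 0
    if (c, i) in memo:
        return memo[(c, i)]
    best = knapsack(c, i + 1)                  # skip item i
    if W[i - 1] <= c:                          # take item i if it fits
        best = max(best, V[i - 1] + knapsack(c - W[i - 1], i + 1))
    memo[(c, i)] = best
    return best

print(knapsack(10, 1))   # -> 22 (items 1 and 3: weight 9, value 22)
```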
The solution method is as explained in https://www.youtube.com/watch?v=6h6Fi6AQiRM and https://www.youtube.com/watch?v=ocZMDMZwhCY.
How does dynamic programming help reduce the time complexity for such Knapsack instances? If it does not improve the time complexity in this case, is the worst-case complexity of the DP solution the same as that of a backtracking search, i.e. 2^n? (I am ignoring pruning here: if pruning is applied, the complexity of both solutions drops, and again DP would be no better than the non-memoized recursive solution.)
**Is overlap in subproblems really missing in the above sample, or am I missing something?**
Dynamic programming is not useful when there are no common (overlapping) subproblems, because there is no point in storing solutions that are never needed again. Binary search, for example, has no common subproblems.
What is an overlapping subproblem? Dynamic programming is used where the solutions of the same subproblems are required again and again. It is similar to the divide-and-conquer approach, in which the overall solution is computed by combining the solutions of the subproblems.
The 0/1 knapsack problem is typically solved by dynamic programming.
A problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times OR a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems [2].
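As an illustration of that definition, the Fibonacci recursion is the textbook case of overlapping subproblems. The sketch below (helper names are my own) counts how often the naive version recomputes work that memoization avoids:

```python
from functools import lru_cache

calls = 0

def fib_naive(k):
    """Naive recursion: the same fib_naive(k) is solved over and over."""
    global calls
    calls += 1
    return k if k < 2 else fib_naive(k - 1) + fib_naive(k - 2)

@lru_cache(maxsize=None)
def fib_memo(k):
    """Memoized: each of the k+1 distinct subproblems is solved exactly once."""
    return k if k < 2 else fib_memo(k - 1) + fib_memo(k - 2)

fib_naive(10)
print(calls)          # 177 calls for only 11 distinct subproblems
print(fib_memo(10))   # 55
```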
DP doesn't help at all on your specific problem instance. But in general, i.e. over all possible input instances, it never solves more subproblems than the pure recursion, and on many instances it solves far fewer. That's why DP is good.
All your DP formulation guarantees is that it can solve any instance of the problem by solving at most n(c+1) subproblems, and indeed it does so for your example: here n = 3 and c = 10, and it solves the problem by solving 14 <= 33 subproblems (including the original problem).
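You can verify the count of 14 for this instance with a small script; the seen/calls bookkeeping is my own addition, and the recursion mirrors the knapsack(c, i) formulation above:

```python
V = [10, 8, 12]
W = [2, 3, 7]
n, C = 3, 10

seen = set()    # distinct (c, i) subproblems
calls = []      # every recursive call, repeats included

def knapsack(c, i):
    calls.append((c, i))
    seen.add((c, i))
    if i > n:
        return 0
    best = knapsack(c, i + 1)
    if W[i - 1] <= c:
        best = max(best, V[i - 1] + knapsack(c - W[i - 1], i + 1))
    return best

result = knapsack(C, 1)
print(len(seen), len(calls))   # 14 14 -> no subproblem is ever repeated
```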
Similarly, the purely recursive solution guarantees that it can solve any instance of the problem by solving at most 2^n subproblems.
You seem to be thinking that the DP algorithm should solve every problem instance faster than the recursive algorithm, but this is not true, and no one is making that claim. There exist instances (like yours) with no overlapping subproblems, and on those instances the DP solves exactly the same number of subproblems as the recursive solution. This says nothing about the behaviour of the two algorithms in general. In general, the DP solves every instance using at most as many subproblems as the recursive solution, and sometimes far fewer, since there do exist problem instances for which the recursive algorithm must solve the same subproblem more than once.
In short: the DP is never worse than the recursion, and it is better in the worst case. That does not imply that it is better on every instance.
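For contrast, here is a hypothetical instance where subproblems do overlap: with weights [1, 2, 3, 4], taking item 3 alone and taking items 1 and 2 together leave the same remaining capacity when item 4 is reached, so the plain recursion visits that state twice while memoization would solve it once. The values are arbitrary; any values show the same effect.

```python
V = [5, 6, 7, 8]   # hypothetical values
W = [1, 2, 3, 4]
n, C = 4, 10

seen = set()    # distinct (c, i) subproblems
calls = []      # every recursive call, repeats included

def knapsack(c, i):
    calls.append((c, i))
    seen.add((c, i))
    if i > n:
        return 0
    best = knapsack(c, i + 1)
    if W[i - 1] <= c:
        best = max(best, V[i - 1] + knapsack(c - W[i - 1], i + 1))
    return best

best_value = knapsack(C, 1)
# More calls than distinct states: some subproblems are solved repeatedly,
# which is exactly the redundancy memoization removes.
print(len(calls), len(seen))
```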