When I have a problem with optimal substructure and no subproblem shares sub-subproblems, can I use a divide and conquer algorithm to solve it?
But when the subproblems share sub-subproblems (overlapping subproblems), can I use dynamic programming to solve the problem?
Is this correct?
And how are greedy algorithms similar to dynamic programming?
Divide and Conquer vs Greedy: D&C uses recursion to reach a solution, whereas a greedy method takes an iterative approach to its subproblems. D&C works top-down, i.e. it breaks the larger problem into smaller subproblems, solves them, and combines their answers to build the overall solution.
Dynamic Programming vs Greedy: dynamic programming is more reliable than the greedy approach. The greedy method follows a top-down approach, whereas dynamic programming is based on a bottom-up strategy. A greedy algorithm builds one feasible solution by making a locally optimal choice for each subproblem, hoping those choices lead to the globally optimal solution.
Like dynamic programming, the greedy method is also used to obtain an optimal solution. In dynamic programming we make a choice at each step, but the choice may depend on the solutions to subproblems; in a greedy method the choice is made before any subproblem is solved.
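A small sketch in Python may make that difference in how the choice is made concrete (coin change is just an illustrative example I picked, not something from the question): the greedy version commits to the largest coin that fits before looking at any subproblem, while the DP version picks a coin only after comparing the subproblem solutions it leads to.

```python
def greedy_coin_change(coins, amount):
    """Greedy: always take the largest coin that still fits.
    The choice is made before any subproblem is solved."""
    count = 0
    for c in sorted(coins, reverse=True):
        take = amount // c
        count += take
        amount -= take * c
    return count if amount == 0 else None  # may fail outright or be suboptimal

def dp_coin_change(coins, amount):
    """Bottom-up DP: the coin chosen for each amount depends on the
    already-computed solutions of the smaller subproblems."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None

# With coins {1, 3, 4} and amount 6, greedy returns 3 coins (4+1+1),
# while DP returns the optimum 2 coins (3+3).
print(greedy_coin_change([1, 3, 4], 6), dp_coin_change([1, 3, 4], 6))
```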
When I have a problem with optimal substructure and no subproblem shares sub-subproblems, can I use a divide and conquer algorithm to solve it?
Yes, as long as you can find an optimal algorithm for each kind of subproblem.
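For instance (a minimal sketch in Python, my own example rather than anything from the question), merge sort is the textbook case: the two halves are independent subproblems that share no sub-subproblems, so plain divide and conquer suffices and nothing needs to be cached.

```python
def merge_sort(xs):
    """Divide and conquer: split, solve each half independently, combine.
    The two subproblems never overlap, so nothing is memoized."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # Combine step: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))
```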
But when the subproblems share sub-subproblems (overlapping subproblems), can I use dynamic programming to solve the problem?
Is this correct?
Yes. Dynamic programming is basically a special case of the family of Divide & Conquer algorithms, one where the subproblems overlap, so each shared subproblem is solved once and its result reused.
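A minimal sketch of what that buys you (in Python, using the usual Fibonacci example of my own choosing): the naive recursion is divide and conquer that keeps re-solving the same sub-subproblems, and memoizing them is exactly the dynamic-programming fix.

```python
from functools import lru_cache

def fib_naive(n):
    """Plain divide and conquer: fib(n-1) and fib(n-2) recompute the same
    sub-subproblems over and over, giving exponential running time."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Same recursion, but each overlapping subproblem is solved once and
    reused (top-down dynamic programming, i.e. memoization)."""
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(40))  # returns instantly; fib_naive(40) takes noticeably longer
```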
And how are greedy algorithms similar to dynamic programming?
They're different.
Dynamic programming gives you the optimal solution.
A greedy algorithm usually gives a good/fair solution in a small amount of time, but it is not guaranteed to reach the optimum.
It is, let's say, similar because it usually divides solution construction into several stages, at each of which it makes a locally optimal choice. But if those stages are not optimal substructures of the original problem, then it normally doesn't lead to the best solution.
EDIT:
As pointed out by @rrenaud, some greedy algorithms have been proven to be optimal (e.g. Dijkstra, Kruskal, Prim).
So, to be more correct, the main difference between greedy and dynamic programming is that the former is not exhaustive on the space of solutions while the latter is.
In fact greedy algorithms are short-sighted on that space, and each choice made during solution construction is never reconsidered.
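As an illustration of that short-sightedness (again only a sketch in Python, using the classic 0/1 knapsack as my example): a greedy pass ordered by value-per-weight ratio commits to each item and never revisits the decision, whereas the DP table in effect considers every include/exclude combination and finds the true optimum.

```python
def greedy_knapsack(items, capacity):
    """Greedy by value/weight ratio: each pick is final and never reconsidered."""
    total = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

def dp_knapsack(items, capacity):
    """0/1 knapsack DP: best[c] is the best value achievable with capacity c,
    considering both taking and skipping every item."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# items are (value, weight) pairs
items = [(60, 10), (100, 20), (120, 30)]
# Greedy locks in the highest-ratio item first and ends up with 160;
# DP finds the true optimum 220 by exploring all combinations.
print(greedy_knapsack(items, 50), dp_knapsack(items, 50))
```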