Most of the time, the confusing part is whether to solve a problem with an exhaustive search (dynamic programming, backtracking, or brute force) or with a greedy approach.
I am not talking about using greedy as a heuristic for a reasonably good solution; I am talking about using a greedy algorithm to find "the solution". I am trying to find some standard ways to validate whether a problem can be solved with a greedy approach, in the same way that optimal substructure and memoization point to dynamic programming. This is not about any specific problem.
Is there any proof, for example by induction, that I can carry out to decide whether a greedy approach will always produce the best solution?
One of the simplest methods for showing that a greedy algorithm is correct is to use a “greedy stays ahead” argument. This style of proof works by showing that, according to some measure, the greedy algorithm is always at least as far ahead as the optimal solution at each iteration of the algorithm.
To make a greedy algorithm, identify an optimal substructure or subproblem in the problem. Then, determine what the solution will include (for example, the largest sum, the shortest path, etc.). Create some sort of iterative way to go through all of the subproblems and build a solution.
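As a toy illustration of that recipe (my own example, not part of the answer above), here is a minimal Python sketch: greedy coin change, where the locally best choice at each step is the largest coin that still fits. Note that this greedy is only optimal for canonical coin systems such as (25, 10, 5, 1).

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Build change iteratively, always taking the largest coin that still fits.

    This greedy is optimal for canonical coin systems like (25, 10, 5, 1),
    but not for arbitrary denominations.
    """
    used = []
    for coin in sorted(coins, reverse=True):
        # Take as many of the current (largest remaining) coin as possible.
        while amount >= coin:
            used.append(coin)
            amount -= coin
    return used

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```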
In a previous post we gave some easy-to-state conditions under which greedy gives a good approximation, but the obvious question remains: can we characterize when greedy algorithms give an optimal solution to a problem? The answer is yes, and the framework that enables us to do this is called a matroid.
A problem that asks for either a minimum or a maximum result is known as an optimization problem, and it has a single optimal value (though possibly several solutions that achieve it). The greedy method is one of the strategies used for solving optimization problems.
To prove that an optimization problem can be solved using a greedy algorithm, we need to prove that the problem has the following:
Optimal substructure property: an optimal global solution contains the optimal solutions of all its subproblems.
Greedy choice property: a global optimal solution can be obtained by greedily selecting a locally optimal choice.
Let us consider the classical activity selection problem. We have a set S of n activities, each one with a start time s[i] and an end time e[i]. We want to choose a subset of S such that the selection maximizes the number of non-overlapping activities.
This problem can be solved using a greedy algorithm, but how can we prove that?
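Concretely, the greedy algorithm in question sorts the activities by end time and keeps every activity that is compatible with the last one chosen. A minimal Python sketch (the list-of-(start, end)-tuples input format is my assumption, not something fixed by the problem statement):

```python
def select_activities(activities):
    """activities: list of (start, end) pairs.
    Returns a maximum-size subset of pairwise non-overlapping activities.
    """
    chosen = []
    last_end = float("-inf")
    # Greedy choice: always consider the remaining activity that ends first.
    for start, end in sorted(activities, key=lambda a: a[1]):
        if start >= last_end:  # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```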
We need to show that it exhibits both the optimal substructure property and the greedy choice property.
Consider a generic activity a contained in a global optimal solution S = {A', a, A''}, where S is the global optimal solution, a is our chosen activity, and A' and A'' are the parts of S that solve the two subproblems of selecting the activities before a and after a. If the problem has the optimal substructure property, then A' and A'' must be optimal solutions of those subproblems; that is, the global optimal solution S contains optimal solutions of all its subproblems.
This is true, but why?
Suppose that A' is not an optimal solution for its subproblem. Then we could substitute a better solution for that subproblem, say S', for A' inside S, obtaining a new global solution with more activities than S. But S was the global optimal solution! Contradiction.
We need to prove that our greedy choice (to select the activity that ends first) is correct.
Let B be a subproblem. Let b be the activity of B that ends first; b is our (first) greedy choice. We need to prove that b is included in some optimal solution for B.
Let X be an optimal solution for the subproblem B, and let x be the activity in X that ends first.
If b = x, then b is in X, the optimal solution for B, and we have shown that the greedy choice is included in an optimal solution.
If b != x, then end_time[b] <= end_time[x], since b was our greedy choice (i.e. the activity that ends first of all in the subproblem B). We can therefore substitute x with b in X and obtain another solution X' = (X \ {x}) U {b}. X' is still feasible: every other activity in X starts after x ends, and b ends no later than x, so b overlaps none of them. X' is optimal because it has the same number of activities as X, and X was optimal; and in this case b is in X', an optimal solution for B.
So in either case our greedy choice b is included in some optimal solution for B.
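The exchange argument above is the actual proof, but if you want an empirical sanity check you can compare the greedy answer against exhaustive search on small random instances. The following self-contained Python sketch (my own; it assumes the (start, end)-tuple format and the convention that touching endpoints do not overlap) does exactly that:

```python
import random
from itertools import combinations

def greedy_count(acts):
    """Number of activities picked by the earliest-end-time greedy rule."""
    last_end, count = float("-inf"), 0
    for start, end in sorted(acts, key=lambda a: a[1]):
        if start >= last_end:
            count, last_end = count + 1, end
    return count

def brute_force_count(acts):
    """Size of the largest pairwise compatible subset, found by exhaustive search."""
    def compatible(subset):
        subset = sorted(subset, key=lambda a: a[1])
        return all(subset[i][1] <= subset[i + 1][0] for i in range(len(subset) - 1))
    return max(k for k in range(len(acts) + 1)
               for c in combinations(acts, k) if compatible(c))

# Compare on small random instances; a single mismatch would disprove optimality.
for _ in range(200):
    acts = [tuple(sorted(random.sample(range(20), 2))) for _ in range(6)]
    assert greedy_count(acts) == brute_force_count(acts)
print("greedy matched brute force on every instance")
```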
Furthermore, there is a powerful mathematical theory that can in some cases be used to mechanically prove that a particular problem can be solved with a greedy approach.
Briefly:
One can define a particular combinatorial structure called a matroid.
It has been proved that matroids have the optimal substructure property and the greedy choice property.
This means that if you can verify that your optimization problem is defined on a matroid, you can run a greedy algorithm on it and it will find the optimal solution.
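To make the matroid connection concrete, here is a hedged Python sketch (my own illustration, not taken from the material referenced below) of the generic matroid greedy algorithm, instantiated on the graphic matroid of a small graph, where "independent" means "acyclic edge set". Scanning edges by increasing weight then yields a minimum spanning forest, i.e. the algorithm specializes to Kruskal's algorithm:

```python
def matroid_greedy(elements, weight, independent):
    """Generic matroid greedy: scan elements by increasing weight and keep each
    one that preserves independence. When `independent` defines a matroid,
    the result is a minimum-weight basis.
    """
    solution = []
    for e in sorted(elements, key=weight):
        if independent(solution + [e]):
            solution.append(e)
    return solution

def acyclic(edges):
    """Independence oracle for the graphic matroid: True iff the edges form no cycle."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # u and v already connected: this edge closes a cycle
        parent[ru] = rv
    return True

# Edges of a small graph as (u, v, weight) triples; greedy on the graphic
# matroid computes a minimum spanning forest (Kruskal's algorithm).
edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 3), ("c", "d", 1)]
print(matroid_greedy(edges, weight=lambda e: e[2], independent=acyclic))
# [('a', 'b', 1), ('c', 'd', 1), ('b', 'c', 2)]
```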
Further information about this can be found here.