 

Good examples, articles, books for understanding dynamic programming [closed]


Dynamic programming is a useful technique for solving hard optimization problems by breaking them up into smaller subproblems. By storing and re-using partial solutions, it avoids the pitfalls of a greedy algorithm. There are two kinds of dynamic programming: bottom-up and top-down.

In order for a problem to be solvable using dynamic programming, it must possess what is called optimal substructure. This means that, if the problem is broken up into a series of subproblems and the optimal solution for each subproblem is found, then the optimal overall solution can be constructed from these subproblem solutions. A problem that does not have this structure cannot be solved with dynamic programming.

Top-Down

Top-down is better known as memoization. It is the idea of storing past calculations in order to avoid re-calculating them each time.

Given a recursive function, say:

fib(n) = 0 if n = 0
         1 if n = 1
         fib(n - 1) + fib(n - 2) if n >= 2

We can easily write this recursively from its mathematical form as:

function fib(n)
  if(n == 0 || n == 1)
    return n
  else
    return fib(n-1) + fib(n-2)

Now, anyone who has been programming for a while or knows a thing or two about algorithmic efficiency will tell you that this is a terrible idea. The reason is that, at each step, you re-calculate the value of fib(i), where i is 2..n-2, over and over again, which makes the running time exponential.

A more efficient approach is to store these values as they are computed, creating a top-down dynamic programming algorithm.

m = map(int, int)
m[0] = 0
m[1] = 1
function fib(n)
  if(m[n] does not exist)
    m[n] = fib(n-1) + fib(n-2)
  return m[n]

By doing this, we calculate fib(i) at most once.
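
For comparison, here is a minimal runnable sketch of the same top-down memoized Fibonacci in Python (the dictionary memo plays the role of the map m above):

# Top-down (memoized) Fibonacci: each fib(i) is computed at most once.
memo = {0: 0, 1: 1}

def fib(n):
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(50))  # 12586269025, computed with a linear number of additions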


Bottom-Up

Bottom-up uses the same idea of storing subproblem results that top-down memoization does. The difference, however, is that bottom-up solves the subproblems in order, combining them through recurrence relations to build up the final result.

In most bottom-up dynamic programming problems, you are often trying to either minimize or maximize a decision. You are given two (or more) options at any given point and you have to decide which is more optimal for the problem you're trying to solve. These decisions, however, are based on previous choices you made.

By making the optimal decision at each point (each subproblem), you ensure that your overall result is optimal.

The most difficult part of these problems is finding the recurrence relationships for solving your problem.

To pay for a bunch of algorithm textbooks, you plan to rob a store that has n items. The problem is that your tiny knapsack can only hold at most W kg. Knowing the weight (w[i]) and value (v[i]) of each item, you want to maximize the value of your stolen goods, which all together must weigh at most W. For each item, you must make a binary choice - take it or leave it.

Now, you need to find what the subproblem is. Being a very bright thief, you realize that the maximum value obtainable from the first i items with a weight limit w can be represented as m[i, w]. In addition, m[0, w] (0 items with at most weight w) and m[i, 0] (i items with 0 max weight) will always be equal to 0 value.

so,

m[i, w] = 0 if i = 0 or w = 0

With your thinking full-face mask on, you notice that item i can only be considered if its weight is less than or equal to the current capacity w; otherwise you have to leave it behind, and the best you can do is whatever you could achieve with the first i - 1 items. If the item does fit, you compare two options: leave it (keeping the best value of the first i - 1 items at capacity w), or take it (its value plus the best value of the first i - 1 items at the reduced capacity w - w[i]).

 m[i, w] = 0 if i = 0 or w = 0
           m[i - 1, w] if w[i] > w
           max(m[i - 1, w], m[i - 1, w - w[i]] + v[i]) if w[i] <= w

These are the recurrence relations described above. Once you have these relations, writing the algorithm is very easy (and short!).

v = values from item1..itemn
w = weights from item1..itemn
n = number of items
W = maximum weight of knapsack
   
m[0..n, 0..W] = array(int, int)
function knapsack
  for w=0..W
    m[0, w] = 0
  for i=1 to n
    m[i, 0] = 0
    for w=1..W
      if w[i] <= w
        if v[i] + m[i-1, w - w[i]] > m[i-1, w]
           m[i, w] = v[i] + m[i-1, w - w[i]]
        else
           m[i, w] = m[i-1, w]
      else
        m[i, w] = m[i-1, w]
  
  return m[n, W]
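
If pseudocode is not your thing, here is a short runnable Python sketch of the same bottom-up 0/1 knapsack recurrence (the item values, weights, and capacity at the bottom are made-up sample data):

def knapsack(values, weights, W):
    # m[i][w] = best value achievable using the first i items with capacity w,
    # following the recurrence above.
    n = len(values)
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w:   # item i fits: take it or leave it
                m[i][w] = max(m[i - 1][w],
                              m[i - 1][w - weights[i - 1]] + values[i - 1])
            else:                     # item i is too heavy at this capacity
                m[i][w] = m[i - 1][w]
    return m[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220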

Additional Resources

  1. Introduction to Algorithms
  2. Programming Challenges
  3. Algorithm Design Manual

Example Problems

Luckily, dynamic programming has become really popular in competitive programming. Check out Dynamic Programming on UVAJudge for some practice problems that will test your ability to implement and find recurrences for dynamic programming problems.


In short, Dynamic Programming is a method for solving complex problems by breaking them down into simpler subproblems, that is, solving the problem step by step.

  1. Dynamic programming;
  2. Introduction to Dynamic Programming;
  3. MIT's Introduction to Algorithms, Lecture 15: Dynamic Programming;
  4. Algorithm Design (book).

I hope these links will help at least a bit.


Start with

  • the Wikipedia article about dynamic programming, then
  • I suggest you read this article on TopCoder
  • chapter 6 about dynamic programming in Algorithms (Vazirani)
  • the dynamic programming chapter in the Algorithm Design Manual
  • the dynamic programming chapter in the classic algorithms book (Introduction to Algorithms)

If you want to test yourself, my choices of online judges are

  • Uva Dynamic programming problems
  • Timus Dynamic programming problems
  • Spoj Dynamic programming problems
  • TopCoder Dynamic programming problems

and of course

  • look at algorithmist dynamic programming category

You can also check good universities' algorithms courses

  • Aduni (Algorithms)
  • MIT (Introduction to Algorithms (chapter 15))

After all, if you can't solve problems, ask on SO; many algorithm addicts exist here.


See below

  • http://www.topcoder.com/tc?d1=tutorials&d2=dynProg&module=Static

and there are many samples and article references in the above article.

After you learn dynamic programming you can improve your skill by solving UVA problems. There are lists of some UVA dynamic programming problems on the UVA discussion board.

Also, the wiki has good, simple samples for it.

Edit: for an algorithms book about it, you can use:

Also you should take a look at Memoization in dynamic programming.


I think Algebraic Dynamic Programming is worth mentioning. It's quite an inspiring presentation of the DP technique and is widely used in the bioinformatics community. Also, Bellman's principle of optimality is stated in a very comprehensible way.

Traditionally, DP is taught by example: algorithms are cast in terms of recurrences between table entries that store solutions to intermediate problems; from this table the overall solution is constructed via some case analysis.

ADP organizes a DP algorithm so that the decomposition of the problem into subproblems and the case analysis are completely separated from the intended optimization objective. This makes it possible to reuse and combine different parts of DP algorithms for similar problems.

There are three loosely coupled parts in an ADP algorithm:

  • building the search space (which is stated in terms of tree grammars);
  • scoring each element of the search space;
  • an objective function selecting those elements of the search space that we are interested in.

All these parts are then automatically fused together, yielding an efficient algorithm.


This USACO article is a good starting point to understand the basics of DP and how it can give tremendous speed-ups. Then look at this TopCoder article which also covers the basics, but isn't written that well. This tutorial from CMU is also pretty good. Once you understand that, you will need to take the leap to 2D DP to solve the problem you refer to. Read through this Topcoder article up to and including the apples question (labelled intermediate).

You might also find watching this MIT video lecture useful, depending on how well you pick things up from videos.

Also be aware that you will need to have a solid grasp of recursion before you will successfully be able to pick up DP. DP is hard! But the really hard part is seeing the solution. Once you understand the concept of DP (which the above should get you to) and you're given the sketch of a solution (e.g. my answer to your question), it really isn't that hard to apply, since DP solutions are typically very concise and not too far off from iterative versions of an easier-to-understand recursive solution.

You should also have a look at memoization, which some people find easier to understand but it is often just as efficient as DP. To explain briefly, memoization takes a recursive function and caches its results to save re-computing the results for the same arguments in future.
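
As one concrete illustration (in Python), the standard library's functools.lru_cache gives you this caching behaviour simply by wrapping the recursive function:

from functools import lru_cache

@lru_cache(maxsize=None)  # results are cached, keyed on the arguments
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # fast, because each fib(i) is only ever computed once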


Only some problems can be solved with Dynamic Programming

Since no-one has mentioned it yet, the properties needed for a dynamic programming solution to be applicable are:

  • Overlapping subproblems. It must be possible to break the original problem down into subproblems in such a way that some subproblems occur more than once. The advantage of DP over plain recursion is that each of these subproblems will be solved only once, and the results saved and reused if necessary. In other words, DP algorithms trade memory for time.
  • Optimal substructure. It must be possible to calculate the optimal solution to a subproblem using only the optimal solutions to subproblems. Verifying that this property holds can require some careful thinking.

Example: All-Pairs Shortest Paths

As a typical example of a DP algorithm, consider the problem of finding the lengths of the shortest paths between all pairs of vertices in a graph using the Floyd-Warshall algorithm.

Suppose there are n vertices numbered 1 to n. Although we are interested in calculating a function d(a, b), the length of the shortest path between vertices a and b, it's difficult to find a way to calculate this efficiently from other values of the function d().

Let's introduce a third parameter c, and define d(a, b, c) to be the length of the shortest path between a and b that visits only vertices in the range 1 to c in between. (It need not visit all those vertices.) Although this seems like a pointless constraint to add, notice that we now have the following relationship:

d(a, b, c) = min(d(a, b, c-1), d(a, c, c-1) + d(c, b, c-1))

The 2 arguments to min() above show the 2 possible cases. The shortest way to get from a to b using only the vertices 1 to c either:

  1. Avoids c (in which case it's the same as the shortest path using only the first c-1 vertices), or
  2. Goes via c. In this case, this path must be the shortest path from a to c followed by the shortest path from c to b, with both paths constrained to visit only vertices in the range 1 to c-1 in between. We know that if this case (going via c) is shorter, then these 2 paths cannot visit any of the same vertices, because if they did it would be shorter still to skip all vertices (including c) between them, so case 1 would have been picked instead.

This formulation satisfies the optimal substructure property -- it is only necessary to know the optimal solutions to subproblems to find the optimal solution to a larger problem. (Not all problems have this important property -- e.g. if we wanted to find the longest paths between all pairs of vertices, this approach breaks down because the longest path from a to c may visit vertices that are also visited by the longest path from c to b.)

Knowing the above functional relationship, and the boundary condition that d(a, b, 0) is equal to the length of the edge between a and b (or infinity if no such edge exists), it's possible to calculate every value of d(a, b, c), starting from c=1 and working up to c=n. d(a, b, n) is the shortest distance between a and b that can visit any vertex in between -- the answer we are looking for.
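
A minimal Python sketch of that bottom-up calculation follows; the adjacency matrix passed in is assumed to already encode the boundary condition (edge lengths, inf where no edge exists, 0 on the diagonal), and vertices are numbered 0..n-1 rather than 1..n:

from math import inf

def floyd_warshall(dist):
    # After the iteration for a given c, d[a][b] holds d(a, b, c): the length
    # of the shortest path from a to b using only vertices 0..c in between.
    n = len(dist)
    d = [row[:] for row in dist]      # boundary condition: direct edges only
    for c in range(n):                # allow vertex c as an intermediate
        for a in range(n):
            for b in range(n):
                d[a][b] = min(d[a][b], d[a][c] + d[c][b])
    return d

# Tiny made-up example: 3 vertices, edges 0->1 (5), 1->2 (2), 2->0 (1).
graph = [[0,   5,   inf],
         [inf, 0,   2],
         [1,   inf, 0]]
print(floyd_warshall(graph))  # [[0, 5, 7], [3, 0, 2], [1, 6, 0]]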