Dynamic programming in F#

What is the most elegant way to implement dynamic programming algorithms that solve problems with overlapping subproblems? In imperative programming one would usually create an array indexed (at least in one dimension) by the size of the problem, and then the algorithm would start from the simplest problems and work towards more complicated ones, using the results already computed.

The simplest example I can think of is computing the Nth Fibonacci number:

int Fibonacci(int N)
{
    // Bottom-up DP: fill the table from the smallest subproblems upwards.
    // Uses the convention F(0) = F(1) = 1 and assumes N >= 1.
    var F = new int[N + 1];
    F[0] = 1;
    F[1] = 1;
    for (int i = 2; i <= N; i++)
    {
        F[i] = F[i - 1] + F[i - 2];
    }
    return F[N];
}

I know you can implement the same thing in F#, but I am looking for a nice functional solution (which is O(N) as well obviously).
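For illustration, a direct F# transcription of the loop above (still imperative in style, shown only to make "the same thing in F#" concrete; the name fibonacci is just for this sketch) could look like this:

let fibonacci n =
    // Bottom-up table fill, translated directly from the C# version above (assumes n >= 1)
    let f = Array.zeroCreate (n + 1)
    f.[0] <- 1
    f.[1] <- 1
    for i in 2 .. n do
        f.[i] <- f.[i - 1] + f.[i - 2]
    f.[n]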

asked Nov 02 '11 by Grzenio

People also ask

What is dynamic programming?

Dynamic programming is a technique in computer programming that efficiently solves a class of problems that have overlapping subproblems and the optimal substructure property.

What are the 2 dynamic programming methods?

There are two approaches to dynamic programming: the top-down approach and the bottom-up approach.
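As an illustrative sketch (the names fibTopDown and fibBottomUp are made up for this example), the two approaches applied to Fibonacci numbers might look like this in F#:

open System.Collections.Generic

// Top-down: start from n, recurse towards the base cases, and cache (memoize) results.
let fibTopDown n =
    let cache = Dictionary<int, int>()
    let rec go n =
        if n < 2 then 1
        else
            match cache.TryGetValue n with
            | true, v -> v
            | _ ->
                let v = go (n - 1) + go (n - 2)
                cache.[n] <- v
                v
    go n

// Bottom-up: start from the smallest subproblems and build upwards with a fold.
let fibBottomUp n =
    let result, _ = List.fold (fun (a, b) _ -> (b, a + b)) (1, 1) [1 .. n]
    result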

What is dynamic programming explain with example?

For example, in the Fibonacci sequence, each number in the series is the sum of its two preceding numbers (0, 1, 1, 2, 3, 5, 8, ...). If you want to calculate the nth Fibonacci value in the sequence, you can break the entire problem down into smaller subproblems.

What is a dynamic programming equation?

The optimality equation is also called the dynamic programming equation (DP) or Bellman equation. The DP equation defines an optimal control problem in what is called feedback or closed-loop form, with u_t = u(x_t, t).
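In a typical discrete-time formulation (the cost c, the dynamics f, and the value function V below are generic placeholders, not taken from any particular text), the DP equation can be written as

V(x_t, t) = \min_{u_t} \{ c(x_t, u_t) + V(f(x_t, u_t), t + 1) \}

so an optimal control takes the feedback form u_t = u(x_t, t), i.e. a function of the current state and time.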


1 Answer

One technique that is quite useful for dynamic programming is called memoization. For more details, see, for example, the blog post by Don Syme or the introduction by Matthew Podwysocki.

The idea is that you write a (naive) recursive function and then add a cache that stores previous results. This lets you write the function in the usual functional style, but get the performance of an algorithm implemented using dynamic programming.

For example, a naive (inefficient) function for calculating Fibonacci numbers looks like this:

let rec fibs n =
  if n < 2 then 1 else
  (fibs (n - 1)) + (fibs (n - 2))

This is inefficient, because when you call fibs 3, it computes fibs 1 twice (and the amount of repeated work grows exponentially if you call, for example, fibs 6). The idea behind memoization is that we add a cache that stores the results of fibs 1, fibs 2, and so on, so repeated calls just pick the pre-calculated value from the cache.

A generic function that does the memoization can be written like this:

open System.Collections.Generic

let memoize(f) =
  // Create a (mutable) cache that stores the results for
  // function arguments that have already been calculated.
  let cache = new Dictionary<_, _>()
  (fun x ->
      // The returned function first performs a cache lookup
      let succ, v = cache.TryGetValue(x)
      if succ then v else 
        // If value was not found, calculate & cache it
        let v = f(x) 
        cache.Add(x, v)
        v)

To write a more efficient Fibonacci function, we can now call memoize and give it the function that performs the calculation as an argument:

let rec fibs = memoize (fun n ->
  if n < 2 then 1 else
  (fibs (n - 1)) + (fibs (n - 2)))

Note that this is a recursive value - the body of the function calls the memoized fibs function.
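A quick usage check (the values assume the 1, 1, 2, 3, ... convention used above; the F# compiler may also warn about the recursive value definition, which is harmless here because fibs is not invoked before it is fully defined):

fibs 10   // evaluates to 89 under this convention
fibs 40   // returns almost instantly, because each argument is computed only once and then read from the cache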

answered Oct 13 '22 by Tomas Petricek