 

Optimising a recursive brute force into a more mathematical/linear solution

I've written this Haskell program to solve Project Euler 15. It uses some very simple dynamic programming (the x == y case) to run a tad faster, so I can actually run it; without that you would expect it to run in O(2^n).

-- Starting in the top left corner of a 2×2 grid, and only being able to move to
-- the right and down, there are exactly 6 routes to the bottom right corner.
--
-- How many such routes are there through a 20×20 grid?

calcPaths :: Int -> Integer
calcPaths s
 = let  go x y
          | x == 0 || y == 0    = 1
          | x == y              = 2 * go x (y - 1)
          | otherwise           = go (x - 1) y + go x (y - 1)
   in   go s s
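
A quick sanity check (a hypothetical GHCi session) agrees with the 2×2 example in the problem statement:

ghci> calcPaths 2
6
ghci> calcPaths 3
20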

I later realised this could be done in O(n) by transforming it into an equation. Thinking about it a little longer, I realised the equation is actually quite similar to my solution above, except that the recursion (which is slow on our hardware) is replaced by mathematics representing the same thing (which is fast on our hardware).

The equivalent equation (the central binomial coefficient):

paths(n) = (2n)! / (n! * n!) = C(2n, n)
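
For concreteness, a minimal sketch of that O(n) closed form in Haskell (using a separate name, closedPaths, so it doesn't clash with the code above):

closedPaths :: Int -> Integer
closedPaths s = product [n + 1 .. 2 * n] `div` product [1 .. n]
  where n = toInteger s
-- (2n)! / (n! * n!) computed with O(n) multiplications;
-- closedPaths 2 == 6, closedPaths 20 == 137846528820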

Is there a systematic way to perform this kind of optimisation (produce and prove an equation to match a recursion) on recursive sets of expressions, specifically one which could be realistically "taught" to a compiler so this reduction is done automatically?

asked Feb 26 '14 by kvanbere

2 Answers

Unfortunately I can't say much about analytical algorithmic optimizations, but in practice there is a useful technique for dynamic programming called memoization. For example, with the memoize library your code can be rewritten as

import Data.Function.Memoize

calcPaths :: Int -> Integer
calcPaths s
 = let  go f x y   -- f is the memoized go, supplied by memoFix2
          | x == 0 || y == 0    = 1
          | x == y              = 2 * f x (y - 1)
          | otherwise           = f (x - 1) y + f x (y - 1)
   in   memoFix2 go s s

so the go function will be calculated only once for any combination of arguments.
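
For example (a quick GHCi check; 137846528820 is the known number of routes through a 20×20 grid):

ghci> calcPaths 20
137846528820

With memoFix2 only about 21 × 21 distinct (x, y) pairs are ever computed, so this finishes immediately.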

answered Oct 23 '22 by Yuuri


You can also use dynamic programming if the problem can be divided into smaller subproblems, for example

F(x, y) = F(x - 1, y) + F(x, y - 1)

Here F(x, y) is expressed in terms of smaller sub-problems, so DP can be used:

// xmax, ymax: grid dimensions (e.g. 20); use a 64-bit type, the 20×20 result overflows int
long long arr[xmax + 1][ymax + 1];

// base conditions: along the top row and left column there is exactly one path
for (int i = 0; i <= xmax; i++)
    arr[i][0] = 1;

for (int j = 0; j <= ymax; j++)
    arr[0][j] = 1;

// main recurrence
for (int i = 1; i <= xmax; i++) {
    for (int j = 1; j <= ymax; j++) {
        arr[i][j] = arr[i - 1][j] + arr[i][j - 1];
    }
}

As for the compiler optimization you mentioned: DP could in principle be used for that. You would need to teach the compiler to check, given a recursive solution, whether it decomposes into sub-problems of smaller size, and if so to build the table up with simple for loops as above. The most difficult part is optimizing the result automatically; for example, the DP above needs O(xmax*ymax) space, but it can easily be reduced to O(xmax + ymax) space (in fact a single row of the table suffices).
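
For illustration, a sketch of that space optimisation in Haskell (matching the language of the question; the name pathsRow is mine): each row of the table is the running sum of the previous row, so only one row needs to be kept at a time.

-- arr[i][j] = arr[i-1][j] + arr[i][j-1] means row i is exactly the
-- running sum (scanl1 (+)) of row i-1, starting from a row of ones.
pathsRow :: Int -> Int -> Integer
pathsRow xmax ymax = last (iterate (scanl1 (+)) firstRow !! xmax)
  where firstRow = replicate (ymax + 1) 1
-- pathsRow 2 2 == 6, pathsRow 20 20 == 137846528820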

An example of such a recurrence solver: http://www.cs.unipr.it/purrs/

answered Oct 23 '22 by Vikram Bhat