edit distance algorithm in Haskell - performance tuning

I'm trying to implement the Levenshtein distance (or edit distance) in Haskell, but its performance degrades rapidly as the string length increases.

I'm still quite new to Haskell, so it would be nice if you could give me some advice on how I could improve the algorithm. I already tried to "precompute" values (the inits), but since it didn't change anything I reverted that change.

I know there's already an editDistance implementation on Hackage, but I need it to work on lists of arbitrary tokens, not necessarily strings. Also, I find it a bit complicated, at least compared to my version.

So, here's the code:

-- standard levenshtein distance between two lists
editDistance      :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 s1 s2 

-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance'      :: Eq a => Int -> Int -> Int -> [a] -> [a] -> Int
editDistance' _ _ ins s1 [] = ins * length s1 
editDistance' _ _ ins [] s2 = ins * length s2 
editDistance' del sub ins s1 s2  
    | last s1 == last s2 = editDistance' del sub ins (init s1) (init s2)
    | otherwise          = minimum [ editDistance' del sub ins s1 (init s2)        + del -- deletion 
                                   , editDistance' del sub ins (init s1) (init s2) + sub -- substitution
                                   , editDistance' del sub ins (init s1) s2        + ins -- insertion
                                   ]

It seems to be a correct implementation, at least it gives exactly the same results as this online tool.
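For example, with unit costs it gives the expected answers on both strings and token lists (this snippet repeats the definition above so it runs on its own):

```haskell
-- self-contained copy of the code above, plus two quick checks
editDistance :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 s1 s2

editDistance' :: Eq a => Int -> Int -> Int -> [a] -> [a] -> Int
editDistance' _ _ ins s1 [] = ins * length s1
editDistance' _ _ ins [] s2 = ins * length s2
editDistance' del sub ins s1 s2
  | last s1 == last s2 = editDistance' del sub ins (init s1) (init s2)
  | otherwise          = minimum [ editDistance' del sub ins s1 (init s2)        + del
                                 , editDistance' del sub ins (init s1) (init s2) + sub
                                 , editDistance' del sub ins (init s1) s2        + ins
                                 ]

main :: IO ()
main = do
  print (editDistance "kitten" "sitting")          -- 3
  print (editDistance [1, 2, 3 :: Int] [2, 3, 4])  -- 2
```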

Thanks in advance for your help! If you need any additional information, please let me know.

Greetings, bzn

asked Apr 01 '11 by bzn


3 Answers

Ignoring that this is a bad algorithm (it should be memoizing; I'll get to that in a second)...

Use O(1) Primitives and not O(n)

One problem is that you use a whole bunch of calls that are O(n) for lists (Haskell lists are singly linked lists). A better data structure would give you O(1) operations; I used Vector:

import qualified Data.Vector as V

-- standard levenshtein distance between two lists
editDistance      :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 (V.fromList s1) (V.fromList s2)

-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance'      :: Eq a => Int -> Int -> Int -> V.Vector a -> V.Vector a -> Int
editDistance' del sub ins s1 s2
  | V.null s2 = ins * V.length s1
  | V.null s1 = ins * V.length s2
  | V.last s1 == V.last s2 = editDistance' del sub ins (V.init s1) (V.init s2)
  | otherwise            = minimum [ editDistance' del sub ins s1 (V.init s2)        + del -- deletion 
                                   , editDistance' del sub ins (V.init s1) (V.init s2) + sub -- substitution
                                   , editDistance' del sub ins (V.init s1) s2        + ins -- insertion
                                   ]

The operations that are O(n) for lists include init, length, and last (though init can at least be consumed lazily). All of these operations are O(1) with Vector.

While real benchmarking should use Criterion, a quick and dirty benchmark:

str2 = replicate 15 'a' ++ replicate 25 'b'
str1 = replicate 20 'a' ++ replicate 20 'b'
main = print $ editDistance str1 str2

shows the vector version takes 0.09 seconds while strings take 1.6 seconds, so we saved about an order of magnitude without even looking at your editDistance algorithm.

Now what about memoizing results?

The bigger issue is obviously the need for memoization. I took this as an opportunity to learn the monad-memo package - my god is that awesome! For one extra constraint (you need Ord a), you get memoization basically for free. The code:

import qualified Data.Vector as V
import Control.Monad.Memo

-- standard levenshtein distance between two lists
editDistance      :: (Eq a, Ord a) => [a] -> [a] -> Int
editDistance s1 s2 = startEvalMemo $ editDistance' (1, 1, 1, (V.fromList s1), (V.fromList s2))

-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: (MonadMemo (Int, Int, Int, V.Vector a, V.Vector a) Int m, Eq a) => (Int, Int, Int, V.Vector a, V.Vector a) -> m Int
editDistance' (del, sub, ins, s1, s2)
  | V.null s2 = return $ ins * V.length s1
  | V.null s1 = return $ ins * V.length s2
  | V.last s1 == V.last s2 = memo editDistance' (del, sub, ins, (V.init s1), (V.init s2))
  | otherwise = do
        r1 <- memo editDistance' (del, sub, ins, s1, (V.init s2))
        r2 <- memo editDistance' (del, sub, ins, (V.init s1), (V.init s2))
        r3 <- memo editDistance' (del, sub, ins, (V.init s1), s2)
        return $ minimum [ r1 + del -- deletion 
                         , r2 + sub -- substitution
                         , r3 + ins -- insertion
                                   ]

You see how the memoization needs a single "key" (see the MonadMemo class)? I packaged all the arguments as a big ugly tuple. It also needs one "value", which is your resulting Int. Then it's just plug and play using the "memo" function for the values you want to memoize.

For benchmarking I used a shorter, but larger-distance, string:

$ time ./so  # the memoized vector version
12

real    0m0.003s

$ time ./so3  # the non-memoized vector version
12

real    1m33.122s

Don't even think about running the non-memoized string version; I figure it would take around 15 minutes at a minimum. As for me, I now love monad-memo - thanks for the package, Eduard!

EDIT: The difference between String and Vector isn't as large in the memoized version, but it still grows to a factor of 2 once the distance reaches around 200, so it's still worthwhile.

EDIT: Perhaps I should explain why the bigger issue is "obviously" memoizing results. Well, if you look at the heart of the original algorithm:

 [ editDistance' ... s1          (V.init s2)  + del 
 , editDistance' ... (V.init s1) (V.init s2) + sub
 , editDistance' ... (V.init s1) s2          + ins]

It's quite clear that a call to editDistance' s1 s2 results in 3 calls to editDistance'... each of which calls editDistance' three more times... and three more times... and AHHH! Exponential explosion! Luckily, most of the calls are identical! For example (using --> for "calls" and eD for editDistance'):

eD s1 s2  --> eD s1 (init s2)             -- The parent
            , eD (init s1) s2
            , eD (init s1) (init s2)
eD (init s1) s2 --> eD (init s1) (init s2)         -- The first "child"
                  , eD (init (init s1)) s2
                  , eD (init (init s1)) (init s2) 
eD s1 (init s2) --> eD s1 (init (init s2))
                  , eD (init s1) (init s2)
                  , eD (init s1) (init (init s2))

Just by considering the parent and two immediate children we can see that the call eD (init s1) (init s2) is made three times. The other child shares calls with the parent too, and all the children share many calls with each other (and their children, cue Monty Python skit).

It would be a fun, and perhaps instructive, exercise to write a runMemo-like function that returns the number of cached results used.
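One way to sketch such a counter by hand (this is my own toy memoizer over State and Data.Map, not monad-memo's internals; Fibonacci stands in for editDistance' to keep it short):

```haskell
import qualified Data.Map as M
import Control.Monad.State

-- A toy memoizer that also counts cache hits. The state carries the memo
-- table together with a counter that is bumped on every cache hit.
type Cache k v = (M.Map k v, Int)

memoized :: Ord k => (k -> State (Cache k v) v) -> k -> State (Cache k v) v
memoized f k = do
  (table, hits) <- get
  case M.lookup k table of
    Just v  -> put (table, hits + 1) >> return v  -- cached: count the hit
    Nothing -> do
      v <- f k                                    -- not cached: compute...
      modify (\(t, h) -> (M.insert k v t, h))     -- ...and remember it
      return v

-- Fibonacci as a small stand-in for editDistance'
fib :: Int -> State (Cache Int Integer) Integer
fib n
  | n < 2     = return (fromIntegral n)
  | otherwise = (+) <$> memoized fib (n - 1) <*> memoized fib (n - 2)

main :: IO ()
main = do
  let (result, (_, hits)) = runState (fib 30) (M.empty, 0)
  print result  -- 832040
  print hits    -- 28 cached results were reused
```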

answered Nov 18 '22 by Thomas M. DuBuisson


You need to memoize editDistance'. There are many ways of doing this, e.g., a recursively defined array.
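A minimal sketch of that recursively defined array idea with unit costs, using Data.Array (this is my illustration, not necessarily the exact shape intended):

```haskell
import Data.Array

-- The table is defined in terms of itself; laziness fills each cell
-- exactly once, giving the usual O(n*m) dynamic-programming behaviour.
editDistance :: Eq a => [a] -> [a] -> Int
editDistance xs ys = table ! (m, n)
  where
    m  = length xs
    n  = length ys
    xa = listArray (1, m) xs
    ya = listArray (1, n) ys
    table = array ((0, 0), (m, n))
                  [ ((i, j), dist i j) | i <- [0 .. m], j <- [0 .. n] ]
    dist 0 j = j
    dist i 0 = i
    dist i j
      | xa ! i == ya ! j = table ! (i - 1, j - 1)
      | otherwise = 1 + minimum [ table ! (i, j - 1)      -- deletion
                                , table ! (i - 1, j - 1)  -- substitution
                                , table ! (i - 1, j)      -- insertion
                                ]

main :: IO ()
main = print (editDistance "kitten" "sitting")  -- 3
```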

answered Nov 18 '22 by augustss


As already mentioned, memoization is what you need. In addition, you are computing the edit distance from right to left, which isn't very efficient with strings, and the edit distance is the same regardless of direction. That is: editDistance (reverse a) (reverse b) == editDistance a b
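A quick way to convince yourself of that direction claim, with a throwaway unmemoized unit-cost version (my own scratch code):

```haskell
-- throwaway unmemoized version, unit costs, consuming from the front
naive :: Eq a => [a] -> [a] -> Int
naive s1 [] = length s1
naive [] s2 = length s2
naive (x:xs) (y:ys)
  | x == y    = naive xs ys
  | otherwise = 1 + minimum [ naive (x:xs) ys  -- drop the head of s2
                            , naive xs ys      -- drop both heads
                            , naive xs (y:ys)  -- drop the head of s1
                            ]

main :: IO ()
main = print ( naive "kitten" "sitting"
             , naive (reverse "kitten") (reverse "sitting") )
```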

For the memoization part there are many libraries that can help. In my example below I chose MemoTrie, since it is quite easy to use and performs well here.

import Data.MemoTrie (HasTrie, memo2)

-- weighted edit distance, consuming the lists front to back
editDistance' :: (HasTrie a, Eq a) => Int -> Int -> Int -> [a] -> [a] -> Int
editDistance' del sub ins = memf
  where
    memf = memo2 f
    f s1 [] = ins * length s1
    f [] s2 = ins * length s2
    f (x:xs) (y:ys)
      | x == y    = memf xs ys
      | otherwise = minimum [ del + memf (x:xs) ys  -- deletion
                            , sub + memf xs ys      -- substitution
                            , ins + memf xs (y:ys)  -- insertion
                            ]

As you can see, all you need to add is the memoization. The rest is the same, except that we start from the beginning of the list instead of the end.

answered Nov 18 '22 by HaskellElephant