I asked a similar question once before; now I'll be more specific. The goal is to learn a Haskell idiom for writing iterative algorithms with monadic results. In particular, this can be useful for implementing all kinds of randomized algorithms, such as genetic algorithms and the like.
I wrote an example program that demonstrates my problem with such algorithms in Haskell. Its complete source is on hpaste.
The key point is to update an element randomly (so the result lives in State StdGen or some other monad):
import Control.Monad.State
import System.Random

type RMonad = State StdGen

-- An example of a random iteration step: a one-dimensional random walk.
randStep :: (Num a) => a -> RMonad a
randStep x = do
    rnd <- get
    let (goRight, rnd') = random rnd :: (Bool, StdGen)
    put rnd'
    if goRight
        then return (x+1)
        else return (x-1)
And then one needs to update many elements and repeat the process many, many times. Here is the problem: since every step is a monadic action (:: a -> m a) repeated many times, it is important to compose such actions efficiently, forgetting the previous step quickly. From what I learned from my previous question (Composing monad actions with folds), seq and deepseq help a lot when composing monadic actions. So I do:
-- Strict (?) iteration.
iterateM' :: (NFData a, Monad m) => Int -> (a -> m a) -> a -> m a
iterateM' 0 _ x = return $!! x
iterateM' n f x = (f $!! x) >>= iterateM' (n-1) f

-- Deeply strict function application.
($!!) :: (NFData a) => (a -> b) -> a -> b
f $!! x = x `deepseq` f x
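As a sanity check of this composition pattern, here is a minimal, self-contained sketch that drives iterateM' with a deterministic counting step in the strict State monad from the transformers package (the step function and its starting values are illustrative, not part of the original program):

```haskell
import Control.DeepSeq (NFData, deepseq)
import Control.Monad.Trans.State.Strict (State, evalState, get, put)

-- Deeply strict function application, as defined above.
($!!) :: NFData a => (a -> b) -> a -> b
f $!! x = x `deepseq` f x

-- Strict iteration: force the accumulator before every step.
iterateM' :: (NFData a, Monad m) => Int -> (a -> m a) -> a -> m a
iterateM' 0 _ x = return $!! x
iterateM' n f x = (f $!! x) >>= iterateM' (n - 1) f

-- A deterministic "step": add the counter held in the state, then bump it.
step :: Int -> State Int Int
step x = do
    n <- get
    put (n + 1)
    return (x + n)

main :: IO ()
main = print (evalState (iterateM' 5 step 0) 1)
-- Adds 1+2+3+4+5 to the initial 0, so this prints 15.
```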
It is certainly better than lazy composition. Unfortunately, it is not enough.
-- main seems to run in O(size*iters^2) time...
main :: IO ()
main = do
    (size:iters:_) <- liftM (map read) getArgs
    let start = take size $ repeat 0
    rnd <- getStdGen
    let end = flip evalState rnd $ iterateM' iters (mapM randStep) start
    putStr . unlines $ histogram "%.2g" end 13
When I measured the time this program needs to finish, it appears to grow roughly as O(N^2) in the number of iterations (memory allocation seems acceptable). For linear asymptotics, this profile should have been flat and constant:
And this is how a heap profile looks:
I assume that such a program should run with very modest memory requirements, and it should take time proportional to the number of iterations. How can I achieve that in Haskell?
The complete runnable source of the example is here.
Some things to consider:
For raw all-out performance, write a custom State monad, like so:
import System.Random.Mersenne.Pure64
data R a = R !a {-# UNPACK #-}!PureMT
-- | The RMonad is just a specific instance of the State monad where the
-- state is just the PureMT PRNG state.
--
-- * Specialized to a known state type
--
newtype RMonad a = S { runState :: PureMT -> R a }
-- Functor and Applicative instances are required by GHC 7.10 and later.
instance Functor RMonad where
    fmap f m = S $ \s -> case runState m s of R a s' -> R (f a) s'

instance Applicative RMonad where
    pure a = S $ \s -> R a s
    mf <*> ma = S $ \s -> case runState mf s of
        R f s' -> case runState ma s' of R a s'' -> R (f a) s''

instance Monad RMonad where
    {-# INLINE return #-}
    return a = S $ \s -> R a s
    {-# INLINE (>>=) #-}
    m >>= k = S $ \s -> case runState m s of
        R a s' -> runState (k a) s'
    {-# INLINE (>>) #-}
    m >> k = S $ \s -> case runState m s of
        R _ s' -> runState k s'
-- | Run function for the RMonad.
runRmonad :: RMonad a -> PureMT -> R a
runRmonad (S m) s = m s

evalRmonad :: RMonad a -> PureMT -> a
evalRmonad r s = case runRmonad r s of R x _ -> x
-- An example of a random iteration step: a one-dimensional random walk.
randStep :: (Num a) => a -> RMonad a
randStep x = S $ \s -> case randomInt s of
    (n, s') | n < 0     -> R (x+1) s'
            | otherwise -> R (x-1) s'
Like so: http://hpaste.org/fastcgi/hpaste.fcgi/view?id=27414#a27414
This runs in constant space (modulo the [Double] you build up), and is some 8x faster than your original.
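If mersenne-random-pure64 isn't at hand, the same specialized-strict-state pattern can be sketched in a self-contained way, with a toy linear congruential generator standing in for PureMT (the nextWord helper and its constants are illustrative stand-ins, not part of any package):

```haskell
import Data.Word (Word64)

-- Strict result/state pair, as in the answer, but with a plain Word64
-- standing in for the PureMT generator state.
data R a = R !a !Word64

newtype RMonad a = S { runState :: Word64 -> R a }

instance Functor RMonad where
    fmap f m = S $ \s -> case runState m s of R a s' -> R (f a) s'

instance Applicative RMonad where
    pure a = S $ \s -> R a s
    mf <*> ma = S $ \s -> case runState mf s of
        R f s' -> case runState ma s' of R a s'' -> R (f a) s''

instance Monad RMonad where
    m >>= k = S $ \s -> case runState m s of R a s' -> runState (k a) s'

-- Toy LCG step playing the role of randomInt from mersenne-random-pure64.
nextWord :: Word64 -> (Word64, Word64)
nextWord s = let s' = s * 6364136223846793005 + 1442695040888963407 in (s', s')

randStep :: (Num a) => a -> RMonad a
randStep x = S $ \s -> case nextWord s of
    (n, s') | even n    -> R (x+1) s'
            | otherwise -> R (x-1) s'

evalRmonad :: RMonad a -> Word64 -> a
evalRmonad r s = case runState r s of R x _ -> x

main :: IO ()
main = print (map abs (evalRmonad (mapM randStep (replicate 3 (0 :: Int))) 42))
-- Each walk step moves 0 by exactly +-1, so the absolute values are all 1.
```

The case-of-strict-pair shape in (>>=) is what keeps the state unboxed and the composition free of thunks, regardless of which PRNG sits in the state.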
The use of a specialized state monad with a local definition significantly outperforms Control.Monad.State.Strict as well.
Here's what the heap looks like, with the same parameters as you:
Note that it is about 10x faster, and uses 1/5th the space. The big red thing is your list of doubles being allocated.
Inspired by your question, I captured the PureMT pattern in a new package: monad-mersenne-random, and now your program becomes this:
The other change I made was to worker/wrapper transform iterateM, enabling it to be inlined:
{-# INLINE iterateM #-}
iterateM n f x = go n x
  where
    go 0 !x = return x
    go n !x = f x >>= go (n-1)
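A self-contained way to exercise this worker/wrapper iterateM is to drive it with a trivial deterministic step in IO (the counting step here is purely illustrative):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Worker/wrapper iterateM as in the answer, with a type signature added.
-- The bang patterns force the accumulator on every step of the loop.
iterateM :: Monad m => Int -> (a -> m a) -> a -> m a
{-# INLINE iterateM #-}
iterateM n f x = go n x
  where
    go 0 !y = return y
    go k !y = f y >>= go (k-1)

main :: IO ()
main = iterateM 1000000 (\x -> return (x+1)) (0 :: Int) >>= print
-- The accumulator stays fully evaluated, so this runs in constant space
-- and prints 1000000.
```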
Overall, with K=500 and N=30k, this makes your code some 220x faster than the original.
The heap is a bit better too, now that iterateM unboxes.