I have been experimenting with the strict and lazy ST monads, and I do not understand clearly the degree of laziness of each.
For example, using the lazy Control.Monad.State.Lazy monad we can write:
import Control.Monad (forever)
import Control.Monad.State.Lazy

main = print $ flip evalState "a" $ do
  forever $ put "b"
  put "c"
  get
This works fine and outputs "c". Dually, the same code with the strict Control.Monad.State.Strict variant runs put "b" forever and hangs.
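For reference, the strict version I am comparing against differs only in the import (a minimal sketch; everything else is unchanged):

import Control.Monad (forever)
import Control.Monad.State.Strict  -- the only line that changes

main = print $ flip evalState "a" $ do
  forever $ put "b"   -- with the strict bind, this loop is never escaped
  put "c"
  get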
Intuitively, I would expect the same duality to hold for the ST monads. That is, given the code:
import qualified Control.Monad.ST as S
import Control.Monad (forever)
import Data.STRef

main = print $ S.runST $ do
  r <- newSTRef "a"
  forever $ writeSTRef r "b"
  writeSTRef r "c"
  readSTRef r
Control.Monad.ST.Lazy should output "c", while Control.Monad.ST.Strict should hang. However, both loop indefinitely. I believe there is a valid reason for this, e.g.: reading backwards, the reference r is not yet allocated at the time the last writeSTRef is called. But it somehow feels like we could do better.
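For concreteness, the lazy variant I tried is the same code with the lazy modules swapped in (a sketch, assuming Data.STRef.Lazy for the reference operations):

import qualified Control.Monad.ST.Lazy as L
import Control.Monad (forever)
import Data.STRef.Lazy

main = print $ L.runST $ do
  r <- newSTRef "a"
  forever $ writeSTRef r "b"
  writeSTRef r "c"
  readSTRef r

It loops just like the strict version.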
How lazy is the lazy Control.Monad.ST.Lazy monad?
Surprisingly, it is perfectly lazy. But Data.STRef.Lazy isn't.
ST.Lazy is lazy

Let's focus on another example for a second:
import qualified Control.Monad.ST as S
import qualified Control.Monad.ST.Lazy as L
squared :: Monad m => m [Integer]
squared = mapM (return . (^2)) [1..]
ok, oops :: [Integer]
ok = L.runST squared
oops = S.runST squared
Even though ok and oops should do the same thing, we can only get at the elements of ok. If we tried to evaluate head oops, it would never terminate; with ok, on the other hand, we can take arbitrarily many elements.
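For instance (a quick check, assuming the definitions above are in scope):

take 5 ok    -- [1,4,9,16,25]
head oops    -- never returns: the strict runST performs the entire infinite computation first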
Or, to compare them to the non-monadic squared list, they behave like:
ok', oops' :: [Integer]
ok' = map (^2) [1..]
oops' = let x = map (^2) [1..] in force x -- Control.DeepSeq.force
That's because the strict version evaluates all state operations, even though they're not required for our result. On the other hand, the lazy version delays the operations:
This module presents an identical interface to Control.Monad.ST, except that the monad delays evaluation of state operations until a value depending on them is required.
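To see this delaying in action, here is a small sketch of my own (the helper nats is made up for illustration): an STRef drives an infinite list inside the lazy ST monad, and each state operation is performed only once the corresponding list element is demanded.

import Control.Monad.ST.Lazy (runST)
import Data.STRef.Lazy (newSTRef, readSTRef, writeSTRef)

-- An infinite list built with mutable state; it can be consumed element by
-- element only because the lazy ST monad delays the state operations.
nats :: [Integer]
nats = runST $ do
  r <- newSTRef 0
  let go = do
        n <- readSTRef r        -- performed only when this element is demanded
        writeSTRef r (n + 1)
        rest <- go
        return (n : rest)
  go

take 5 nats yields [0,1,2,3,4]; the strict runST would instead try to perform the whole infinite loop first, just like oops.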
readSTRef?

Now let's focus again on your example. Note that we can get an infinite loop with even simpler code:
main = print $ L.runST $ do
  forever $ return ()
  r <- newSTRef "a"
  readSTRef r
If we add an additional return at the end …
main = print $ L.runST $ do
  forever $ return ()
  r <- newSTRef "a"
  readSTRef r
  return "a"
… everything is fine. So apparently there is something strict in newSTRef or readSTRef. Let's have a look at their implementation:
import qualified Data.STRef as ST
newSTRef = strictToLazyST . ST.newSTRef
readSTRef = strictToLazyST . ST.readSTRef
writeSTRef r a = strictToLazyST (ST.writeSTRef r a)
And there's the culprit. Data.STRef.Lazy is actually implemented via Data.STRef, which is meant for Control.Monad.ST.Strict. strictToLazyST only hides this detail:
strictToLazyST :: ST.ST s a -> ST s a
strictToLazyST m = ST $ \s -> ...

From its documentation:

Convert a strict ST computation into a lazy one. The strict state thread passed to strictToLazyST is not performed until the result of the lazy state thread it returns is demanded.
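That documented behavior is easy to observe. The following is a sketch of my own (the name notPerformed and the use of Debug.Trace are only for illustration): the wrapped strict action is never performed, because neither its result nor any later state is ever demanded.

import Control.Monad.ST.Lazy (runST, strictToLazyST)
import Debug.Trace (traceM)

notPerformed :: String
notPerformed = runST $ do
  _ <- strictToLazyST (traceM "strict action performed")  -- never demanded
  return "done"

Evaluating notPerformed should yield "done" without ever printing the trace message; forcing a value that does depend on that state thread, such as the result of a later readSTRef, would perform it, and that is exactly what bites us below.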
Now let's put things together:

- In main, we want to print the value given by the lazy ST computation.
- The ST computation's value is given by a lazy readSTRef.
- readSTRef is actually implemented as a lazy wrapper around the strict readSTRef.
- The wrapped readSTRef evaluates the state as if it were a strict one.
- forever $ return () bites us.

So the current ST.Lazy is lazy enough. It's Data.STRef.Lazy that's too strict. As long as Data.STRef.Lazy is based on strictToLazyST, this behavior will endure.