Right-tightening ArrowLoop law

According to the Control.Arrow documentation, for many monads (those for which the >>= operation is strict) the instance MonadFix m => ArrowLoop (Kleisli m) does not satisfy the right-tightening law (loop (f >>> first h) = loop f >>> h) required by the ArrowLoop class. Why is that so?

asked Jan 16 '17 by themarketka

1 Answer

This is a multi-faceted question with several different angles, and it goes back to value recursion (mfix/mdo) in Haskell. See here for background information. I'll try to address the right-tightening issue in some detail.

Right-tightening for mfix

Here's the right-tightening property for mfix:

mfix (λ(x, y). f x >>= λz. g z >>= λw. return (z, w))
    = mfix f >>= λz. g z >>= λw. return (z, w)

Here it is in picture form:

[Figure: the right-shrinking law, diagrammatically]

The dotted lines show where the "knot-tying" happens. This is essentially the same law as the one mentioned in the question, except it is expressed in terms of mfix and value recursion. As shown in Section 3.1 of this work, for a monad with a strict bind operator you can always write an expression that distinguishes the left-hand side of this equation from the right-hand side, thus falsifying the property. (See below for an actual example in Haskell.)

When an arrow is created via the Kleisli construction from a monad with mfix, the corresponding loop operator fails the corresponding property in the same way.
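
Recall that for Kleisli arrows, loop is mfix in a thin disguise: the second component of the output is fed back as input through mfix, and only the first component is kept. Here is a standalone sketch of that idea (kleisliLoop is an illustrative name of mine; the actual ArrowLoop (Kleisli m) instance in Control.Arrow is essentially this, modulo cosmetics):

import Control.Monad     (liftM)
import Control.Monad.Fix (MonadFix (..))

-- Feed the second component of the result back into the input via mfix,
-- and return only the first component.
kleisliLoop :: MonadFix m => ((b, d) -> m (c, d)) -> b -> m c
kleisliLoop f x = liftM fst (mfix (\ ~(_, d) -> f (x, d)))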

Domain theory and approximations

In domain-theoretic terms, the mismatch is always an approximation: the left-hand side is always less defined than the right. (More precisely, the lhs is below the rhs in the domain of PCPOs, the typical domain used for Haskell semantics.) In practice, this means that the right-hand side terminates more often and is to be preferred when that is a concern. Again, see Section 3.1 of this for details.

In practice

This may all sound abstract, and in a certain sense it is. More intuitively: on the left-hand side, g gets a chance to act on the recursive value as it is being produced, since it sits inside the "loop," and it can thus interfere with the fixed-point computation. Here's an actual Haskell program to illustrate:

import Control.Monad.Fix

f :: [Int] -> IO [Int]
f xs = return (1:xs)

g :: [Int] -> IO Int
g [x] = return x
g _   = return 1

-- lhs: g sits inside the knot, so it can interfere with the fixed-point computation
lhs = mfix (\(x, y) -> f x >>= \z -> g z >>= \w -> return (z, w))
-- rhs: the knot is tied around f alone; g runs only after mfix has produced its value
rhs = mfix f >>= \z -> g z >>= \w -> return (z, w)

If you evaluate the lhs it will never terminate, while rhs will give you the infinite list of 1's as expected:

*Main> :t lhs
lhs :: IO ([Int], Int)
*Main> lhs >>= \(xs, y) -> return (take 5 xs, y)
^CInterrupted.
*Main> rhs >>= \(xs, y) -> return (take 5 xs, y)
([1,1,1,1,1],1)

I interrupted the computation in the first case as it is non-terminating. While this is a contrived example, it is the simplest to illustrate the point. (See below for a rendering of this example using the mdo-notation, which might be easier to read.)

Example monads

Typical examples of monads that do not satisfy this law include Maybe, the list monad, IO, and any other monad based on an algebraic type with multiple constructors. Typical examples of monads that do satisfy it are the State and Environment monads. See Section 4.10 for a table summarizing these results.
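
To make this concrete for Maybe, whose >>= is strict in its first argument, here is a sketch using Maybe analogues of the f and g above (the names fM, gM, lhsM, rhsM are mine, and Control.Monad.Fix is assumed to be imported as before):

fM :: [Int] -> Maybe [Int]
fM xs = Just (1:xs)

gM :: [Int] -> Maybe Int
gM [x] = Just x
gM _   = Just 1

lhsM, rhsM :: Maybe ([Int], Int)
-- gM is inside the knot and forces the recursive value: lhsM is bottom
lhsM = mfix (\(x, _) -> fM x >>= \z -> gM z >>= \w -> return (z, w))
-- the knot is tied around fM alone: rhsM is Just (ones, 1) for the infinite list ones
rhsM = mfix fM >>= \z -> gM z >>= \w -> return (z, w)

Evaluating lhsM loops (GHC typically reports it as <<loop>>), while fmap (\(xs, w) -> (take 5 xs, w)) rhsM should give Just ([1,1,1,1,1],1).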

Pure-right shrinking

Note that a "weaker" form of right-tightening, where the function g in the above equation is pure, follows from value-recursion laws:

mfix (λ(x, y). f x >>= λz. return (z, h z))
  = mfix f >>= λz. return (z, h z)

This is the same law as before with g = return . h; that is, g cannot perform any effects. In this case there is no way to distinguish the left-hand side from the right, as you might expect, and the result indeed follows from the value-recursion axioms. (See Section 2.6.3 for a proof.) The picture in this case looks like this:

[Figure: the pure right-shrinking law, diagrammatically]

This property follows from the sliding property, which is a version of dinaturality for value-recursion, and is known to be satisfied by many monads of interest: Section 2.4.
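
For contrast, here is a small sketch of the pure case in terms of the f from the earlier IO example and a hypothetical pure function h (so g = return . h). Note the lazy pattern ~(x, _), which is what the tupling in the formal statement corresponds to:

h :: [Int] -> Int
h = head

lhsPure, rhsPure :: IO ([Int], Int)
-- h is pure, so nothing inside the knot forces the recursive value too early
lhsPure = mfix (\ ~(x, _) -> f x >>= \z -> return (z, h z))
rhsPure = mfix f >>= \z -> return (z, h z)

Both lhsPure >>= \(xs, y) -> return (take 5 xs, y) and the same query on rhsPure should yield ([1,1,1,1,1],1).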

Impact on the mdo-notation

The failure of this law has an impact on how the mdo-notation was designed in GHC. The translation includes the so-called "segmentation" step precisely to avoid the failure of the right-shrinking law. Some people consider this a bit controversial, since GHC picks the segments automatically, essentially applying the right-tightening law on the programmer's behalf. If explicit control is needed, GHC provides the rec keyword to leave the decision to the user.

Using the mdo-notation and an explicit do with rec, the above example renders as follows:

{-# LANGUAGE RecursiveDo #-}

f :: [Int] -> IO [Int]
f xs = return (1:xs)

g :: [Int] -> IO Int
g [x] = return x
g _   = return 1


lhs :: IO ([Int], Int)
lhs = do rec x <- f x   -- both bindings are inside the rec block,
             w <- g x   -- so g takes part in the knot: this gets stuck
         return (x, w)

rhs :: IO ([Int], Int)
rhs = mdo x <- f x      -- mdo segments automatically: only this binding is recursive
          w <- g x
          return (x, w)

One might naively expect that lhs and rhs above should be the same, but due to the failure of the right-shrinking law, they are not. Just like before, lhs gets stuck, while rhs successfully produces the value:

*Main> lhs >>= \(x, y) -> return (take 5 x, y)
^CInterrupted.
*Main> rhs >>= \(x, y) -> return (take 5 x, y)
([1,1,1,1,1],1)

Visually inspecting the code, we see that the recursion is needed only for f, which justifies the segmentation automatically performed by the mdo-notation.

If the rec notation is preferred, the programmer needs to place it around minimal blocks to ensure termination. For instance, the above expression for lhs should be written as follows:

lhs :: IO ([Int], Int)
lhs = do rec x <- f x
         w <- g x
         return (x, w)

The mdo-notation takes care of this and places the recursion over minimal blocks without user intervention.
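
To make the segmentation concrete, the rhs above amounts to roughly the following (a sketch of the idea rather than GHC's literal desugaring output; rhsDesugared is an illustrative name, and Control.Monad.Fix is assumed to be imported for the explicit mfix):

rhsDesugared :: IO ([Int], Int)
rhsDesugared = do x <- mfix f   -- the recursive segment, x <- f x, becomes a call to mfix
                  w <- g x      -- g stays outside the knot
                  return (x, w)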

Failure for Kleisli Arrows

After this lengthy detour, let us now go back to the original question about the corresponding law for arrows. Similar to the mfix case, we can construct a failing example for Kleisli arrows as well. In fact, the above example translates more or less directly:

{-# LANGUAGE Arrows #-}
import Control.Arrow

f :: Kleisli IO ([Int], [Int]) ([Int], [Int])
f = proc (_, ys) -> returnA -< (ys, 1:ys)

g :: Kleisli IO [Int] Int
g = proc xs -> case xs of
                 [x] -> returnA -< x
                 _   -> returnA -< 1

lhs, rhs :: Kleisli IO [Int] Int
lhs = loop (f >>> first g)   -- g is inside the loop and forces the fed-back value
rhs = loop f >>> g           -- g runs only after loop has tied the knot

Just like in the case of mfix, we have:

*Main> runKleisli rhs []
1
*Main> runKleisli lhs []
^CInterrupted.

The failure of right-tightening for the IO monad's mfix thus also prevents the Kleisli IO arrow from satisfying the right-tightening law in its ArrowLoop instance.

answered Nov 05 '22 by alias