From a gentle introduction to Haskell, there are the following monad laws. Can anyone intuitively explain what they mean?
1. return a >>= k            =  k a
2. m >>= return              =  m
3. xs >>= return . f         =  fmap f xs
4. m >>= (\x -> k x >>= h)   =  (m >>= k) >>= h
Here is my attempted explanation:
1. We expect the return function to wrap a so that its monadic nature is trivial. When we bind it to a function, there are no monadic effects; it should just pass a to the function.
2. The unwrapped output of m is passed to return, which rewraps it. The monadic nature remains the same, so the result is the same as the original monad.
3. The unwrapped value is passed to f and then rewrapped. The monadic nature remains the same. This is the behavior we expect when we transform a normal function into a monadic function.
4. I don't have an explanation for this law. It does say that the monad must be "almost associative", though.
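For concreteness, here is a quick sanity check of the four laws in the Maybe monad (my own sketch; the helpers k and h are just made up for illustration):

k :: Int -> Maybe Int
k x = Just (x * 2)

h :: Int -> Maybe Int
h x = if x > 0 then Just (x - 1) else Nothing

-- Law 1: return 3 >>= k           == k 3                            == Just 6
-- Law 2: Just 3 >>= return        == Just 3
-- Law 3: Just 3 >>= return . (+1) == fmap (+1) (Just 3)             == Just 4
-- Law 4: (Just 3 >>= k) >>= h     == Just 3 >>= (\x -> k x >>= h)   == Just 5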
Your descriptions seem pretty good. Generally people speak of three monad laws, which you have as 1, 2, and 4. Your third law is slightly different, and I'll get to that later.
For the three monad laws, I find it much easier to get an intuitive understanding of what they mean when they're re-written using Kleisli composition:
-- defined in Control.Monad
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
mf >=> n = \x -> mf x >>= n
Now the laws can be written as:
1) return >=> mf = mf                  -- left identity
2) mf >=> return = mf                  -- right identity
4) (f >=> g) >=> h = f >=> (g >=> h)   -- associativity
1) Left Identity Law - returning a value doesn't change the value and doesn't do anything in the monad.
2) Right Identity Law - returning a value doesn't change the value and doesn't do anything in the monad.
4) Associativity - monadic composition is associative (I like KennyTM's answer for this)
The two identity laws basically say the same thing, but they're both necessary because return should have identity behavior on both sides of the bind operator.
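As a small sketch of what that looks like in practice (safeSqrt is a made-up Kleisli arrow in the Maybe monad):

import Control.Monad ((>=>))

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x >= 0    = Just (sqrt x)
  | otherwise = Nothing

-- Left identity:  (return >=> safeSqrt) 4 == safeSqrt 4 == Just 2.0
-- Right identity: (safeSqrt >=> return) 4 == safeSqrt 4 == Just 2.0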
Now for the third law. This law essentially says that both the Functor instance and your Monad instance behave the same way when lifting a function into the monad, and that neither does anything monadic. If I'm not mistaken, it's the case that when a monad obeys the other three laws and the Functor instance obeys the functor laws, then this statement will always be true.
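For example, here's that law checked in the list monad; both sides just lift (+1) over the structure without introducing any extra effects (a sketch, nothing special about lists here):

lhs, rhs :: [Int]
lhs = [1, 2, 3] >>= return . (+ 1)   -- [2,3,4]
rhs = fmap (+ 1) [1, 2, 3]           -- [2,3,4]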
A lot of this comes from the Haskell Wiki. The Typeclassopedia is a good reference too.
No disagreements with the other answers, but it might help to think of the monad laws as actually describing two sets of properties. As John says, the third law you mention is slightly different, but here's how the others can be split apart:
As in John's answer, what's called a Kleisli arrow for a monad is a function with type a -> m b. Think of return as id and (<=<) as (.), and the monad laws are the translations of these:
- id . f is equivalent to f
- f . id is equivalent to f
- (f . g) . h is equivalent to f . (g . h)
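To make the analogy concrete, here's a sketch using (<=<) from Control.Monad with a couple of made-up Kleisli arrows in Maybe; composing them reads just like ordinary (.):

import Control.Monad ((<=<))

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

-- Kleisli composition, read right to left just like (.)
headRecip :: [Double] -> Maybe Double
headRecip = safeRecip <=< safeHead

-- headRecip [2, 4]              == Just 0.5
-- headRecip []                  == Nothing
-- (return <=< headRecip) [2, 4] == headRecip [2, 4]   -- the "id . f" law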
For the most part, you can think of the extra monadic structure as a sequence of extra behaviors associated with a monadic value; e.g., Maybe being "give up" for Nothing and "keep going" for Just. Combining two monadic actions then essentially concatenates the sequences of behaviors they held.
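A sketch of that "concatenation of behaviors" idea, assuming the Writer monad from the mtl package (the step function is made up for illustration): each action appends to a log, and (>>=) runs the actions in order and concatenates their logs.

import Control.Monad.Writer (Writer, runWriter, tell)

step :: Int -> Writer [String] Int
step n = do
  tell ["saw " ++ show n]    -- this action's "behavior": one log entry
  return (n + 1)

-- runWriter (step 1 >>= step)   == (3, ["saw 1", "saw 2"])
-- runWriter (return 1 >>= step) == (2, ["saw 1"])   -- return contributes no behavior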
In this sense, return is again an identity--the null action, akin to an empty list of behaviors--and (>=>) is concatenation. So, the monad laws are translations of these:
- [] ++ xs is equivalent to xs
- xs ++ [] is equivalent to xs
- (xs ++ ys) ++ zs is equivalent to xs ++ (ys ++ zs)
These three laws describe a ridiculously common pattern, which Haskell unfortunately can't quite express in full generality. If you're interested, Control.Category gives a generalization of "things that look like function composition", while Data.Monoid generalizes the latter case where no type parameters are involved.
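In fact, base already packages the Kleisli-arrow view this way: the Kleisli newtype from Control.Arrow has a Category instance whose id is return and whose (.) is effectively (<=<). A small sketch (halve is made up for illustration):

import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Kleisli (..))

halve :: Kleisli Maybe Int Int
halve = Kleisli (\n -> if even n then Just (n `div` 2) else Nothing)

quarter :: Kleisli Maybe Int Int
quarter = halve . halve    -- Category composition on Kleisli arrows

-- runKleisli quarter 8 == Just 2
-- runKleisli quarter 6 == Nothing   -- 3 is odd, so the second halve gives up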