The monad laws are traditionally described in terms of >>= and pure:

pure a >>= k            = k a
m >>= pure              = m
m >>= (\x -> k x >>= h) = (m >>= k) >>= h
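(As a sanity check, these laws can be tested for a concrete monad. Here is a quick sketch using Maybe; the functions k and h and the value m are arbitrary examples of my own, not anything canonical:)

```haskell
-- Checking the three monad laws in the Maybe monad.
-- k, h, and m are arbitrary examples chosen for illustration.
k, h :: Int -> Maybe Int
k x = Just (x + 1)
h x = if x > 0 then Just (x * 2) else Nothing

m :: Maybe Int
m = Just 3

main :: IO ()
main = do
  print ((pure 3 >>= k) == k 3)                            -- left identity: True
  print ((m >>= pure) == m)                                -- right identity: True
  print ((m >>= (\x -> k x >>= h)) == ((m >>= k) >>= h))   -- associativity: True
```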
However, monads can also be defined in terms of join instead of >>=. I would like to come up with a formulation of the monad laws in terms of join.
Using x >>= f = join (fmap f x), it’s easy to rewrite the existing monad laws to eliminate >>=. Simplifying the results slightly with the help of the applicative laws, the first two laws are quite pleasantly expressed:

join . pure      = id
join . fmap pure = id
The intuition for these laws is easy, too, since clearly, introducing an extra “layer” with pure should be a no-op when combined with join. The third law, however, is not nearly so nice. It ends up looking like this:
join (fmap (\x -> join (fmap h (k x))) m) = join (fmap h (join (fmap k m)))
This does not pleasantly reduce using the applicative laws, and it’s much harder to understand without staring at it for a while. It certainly doesn’t have the same easy intuition.
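(For what it’s worth, the law can at least be checked mechanically for a concrete monad. A small sketch for the list monad, with arbitrary made-up k, h, and m:)

```haskell
import Control.Monad (join)

-- Arbitrary example functions and value, chosen only to exercise the law.
k, h :: Int -> [Int]
k x = [x, x + 10]
h x = [x * 2]

m :: [Int]
m = [1, 2]

main :: IO ()
main =
  print (join (fmap (\x -> join (fmap h (k x))) m)
           == join (fmap h (join (fmap k m))))   -- prints True
```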
Is there an equivalent, alternative formulation of the monad laws in terms of join that is easier to understand? Alternatively, is there any way to simplify the above law, or to make it easier to grok? The version with >>= is already less nice than the one expressed with Kleisli composition, but the version with join is nearly unreadable.
There are three monad laws: left identity, right identity, and associativity.
Stolen straight from Wikipedia:

(The natural transformation η: 1 -> T is pure; the natural transformation µ: T^2 -> T is join.)
µ . Tµ = µ . µT
In Haskell:
join . fmap join = join . join
In English: If you start with three layers of monad as mmma :: Monad m => m (m (m a)), it doesn't matter if you flatten it inner layer first and then outer, or flatten it outer layer first and then inner. This is the same law as what you listed as the third (associativity).
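(To see the two flattening orders agree concretely, here is a quick sketch with three layers of the list monad, using an arbitrary nested value of my own choosing:)

```haskell
import Control.Monad (join)

mmma :: [[[Int]]]
mmma = [[[1, 2], [3]], [[4]]]  -- arbitrary example value

main :: IO ()
main = do
  print (join (fmap join mmma))  -- flatten inner layer first: [1,2,3,4]
  print (join (join mmma))       -- flatten outer layer first: [1,2,3,4]
```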
µ . Tη = µ . ηT = 1
In Haskell:
join . fmap pure = join . pure = id
In English: If you start with one layer of monad as ma :: Monad m => m a, it doesn't matter if you create a new layer inside it and then flatten it, or if you create a new layer outside it and then flatten it, and both are the same as doing nothing at all. This law is a combination of your first two.
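(Again concretely, for the list monad with an arbitrary one-layer value:)

```haskell
import Control.Monad (join)

ma :: [Int]
ma = [1, 2, 3]  -- arbitrary example value

main :: IO ()
main = do
  print (join (fmap pure ma))  -- new layer inside, then flatten: [1,2,3]
  print (join (pure ma))       -- new layer outside, then flatten: [1,2,3]
```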
Also, join being a natural transformation means that

join . fmap (fmap f) = fmap f . join

because of parametricity.
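(A concrete instance of this naturality square, again for the list monad with an arbitrary f of my own choosing:)

```haskell
import Control.Monad (join)

mma :: [[Int]]
mma = [[1, 2], [3]]  -- arbitrary example value

f :: Int -> Int
f = (* 10)           -- arbitrary example function

main :: IO ()
main = do
  print (join (fmap (fmap f) mma))  -- map under both layers, then flatten: [10,20,30]
  print (fmap f (join mma))         -- flatten, then map: [10,20,30]
```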