Why is Haskell monadic bind left-associative?

Tags: haskell, monads

The >>= and >> operators are both infixl 1. Why the left-associativity?

In particular, I observe the equivalences:

(do a; b; c ) == (a >> (b >> c))   -- Do desugaring
(a >> b >> c) == ((a >> b) >> c)   -- Fixity definition

So do is desugared differently to how the fixity definition naturally works, which is surprising.
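
For illustration, spelled out with a small IO example (the exact actions don't matter):

main :: IO ()
main = do
  putStr "a"
  putStr "b"
  putStr "c"
-- the do block desugars to  putStr "a" >> (putStr "b" >> putStr "c"),
-- whereas the parser reads  putStr "a" >> putStr "b" >> putStr "c"
-- as                        (putStr "a" >> putStr "b") >> putStr "c"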

Asked Mar 24 '18 by Neil Mitchell

2 Answers

>>= must surely be left-associative.

Prelude> ["bla","bli di","blub"] >>= words >>= reverse
"albilbidbulb"
Prelude> ["bla","bli di","blub"] >>= (words >>= reverse)

<interactive>:3:30: error:
    • Couldn't match expected type ‘[[b0]]’
                  with actual type ‘String -> [String]’
    • Probable cause: ‘words’ is applied to too few arguments
      In the first argument of ‘(>>=)’, namely ‘words’
      In the second argument of ‘(>>=)’, namely ‘(words >>= reverse)’
      In the expression:
        ["bla", "bli di", "blub"] >>= (words >>= reverse)
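
Spelling out the types makes the difference clearer (a small sketch; step1 and step2 are just names for the two stages):

step1 :: [String]
step1 = ["bla","bli di","blub"] >>= words   -- ["bla","bli","di","blub"]

step2 :: String
step2 = step1 >>= reverse                   -- "albilbidbulb"

-- With left association each >>= takes the list built so far on its left
-- and a plain function on its right.  Right-nesting instead asks GHC to
-- treat words itself as a monadic value (the error above expects [[b0]]),
-- but words is a function String -> [String].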

And >> pretty much follows >>=; if it had a different fixity it would not only feel weird, as Lennart said, but would also prevent you from using both operators in the same chain:

Prelude> ["bla","bli di","blub"] >>= words >> "Ha"
"HaHaHaHa"
Prelude> infixr 1 ⬿≫; (⬿≫) = (>>)
Prelude> ["bla","bli di","blub"] >>= words ⬿≫ "Ha"

<interactive>:6:1: error:
    Precedence parsing error
        cannot mix ‘>>=’ [infixl 1] and ‘⬿≫’ [infixr 1] in the same infix expression
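
Explicit parentheses do still work with the mixed fixities, at the cost of exactly the noise the shared fixity avoids; continuing the same session:

Prelude> (["bla","bli di","blub"] >>= words) ⬿≫ "Ha"
"HaHaHaHa"
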
Answered Oct 10 '22 by leftaroundabout


>>= is left-associative because it's convenient. We want m >>= f1 >>= f2 to be parsed as (m >>= f1) >>= f2, not as m >>= (f1 >>= f2), which would likely not type check, as pointed out in the comments.
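
For instance, a minimal sketch with a hypothetical safeDiv helper in the Maybe monad:

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

chained :: Maybe Int
chained = Just 100 >>= safeDiv 1000 >>= safeDiv 10
-- parsed as (Just 100 >>= safeDiv 1000) >>= safeDiv 10, i.e. Just 1;
-- the right-nested Just 100 >>= (safeDiv 1000 >>= safeDiv 10) does not typecheck.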

The associativity of >>, however, simply mirrors that of >>=. This is likely for the sake of consistency, since we can prove that >> is associative via the third monad law: (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g). That is to say, the choice of fixity doesn't actually matter semantically. Here is the proof:

-- Definition:
a >> b ≡ a >>= (\_ -> b)

-- Proof: (a >> b) >> c ≡ a >> (b >> c)
  (a >> b) >> c
≡ (a >>= (\_ -> b)) >> c                  -- [Definition]
≡ (a >>= (\_ -> b)) >>= (\_ -> c)         -- [Definition]
≡ a >>= (\x -> (\_ -> b) x >>= (\_ -> c)) -- [Monad law]
≡ a >>= (\_ -> b >>= (\_ -> c))           -- [Beta-reduction]
≡ a >>= (\_ -> b >> c)                    -- [Definition]
≡ a >> (b >> c)                           -- [Definition]
∎
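
As a quick GHCi sanity check of that law, here it is in the list monad, where >> is not a no-op:

Prelude> (([1,2] >> "ab") >> "xy") == ([1,2] >> ("ab" >> "xy"))
True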

do-notation desugars differently because it has a different goal. Since a bind statement is essentially writing out a lambda, with do {v <- m; (...)} standing for m >>= (\v -> (...)), the nesting has to grow to the right. As before, the desugaring of >> simply follows >>= for the sake of consistency.
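
Concretely, the relevant desugaring rules (roughly as in the Haskell report, ignoring pattern-match failure) are:

do { e; stmts }      =  e >>  do { stmts }
do { v <- e; stmts } =  e >>= \v -> do { stmts }   -- plain variable pattern

so do { a; b; c } unfolds, statement by statement, to a >> (b >> c), which is the equivalence from the question.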

Answered Oct 09 '22 by AJF