I have some questions about the definition of the binding function (>>=)
in Haskell.
Because Haskell is a pure language, we can use Monad to handle operations with side effects. I think this strategy is somewhat like putting all actions that may cause side effects into another world, and controlling them from our "pure" Haskell world through `do` or `>>=`.
So when I look at the definition of the `>>=` function

(>>=) :: Monad m => m a -> (a -> m b) -> m b

it takes an `(a -> m b)` function, so the result `m a` of the former action can be "unpacked" to a non-monadic `a` inside `>>=`. Then the function `(a -> m b)` takes that `a` as its input and returns another monadic value `m b` as its result. With the binding function I can operate on monadic values without bringing any side effects into pure Haskell code.
My question is: why do we use an `(a -> m b)` function here? In my opinion, an `m a -> m b` function could do this as well. Is there a reason, or is it just designed this way?
EDIT

From the comments I understand it's hard to extract an `a` from an `m a`. However, I think I can consider a monadic `m a` as an `a` with a side effect.

Is it possible to assume that a function `m a -> m b` acts similarly to `a -> b`, so that we can define an `m a -> m b` the same way we define an `a -> b`?
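To make the shape I have in mind concrete (again with Maybe, names made up for illustration):

step :: Int -> Maybe Int           -- the argument >>= actually expects
step x = if x > 0 then Just (x - 1) else Nothing

stepM :: Maybe Int -> Maybe Int    -- the shape I imagine could be used instead
stepM = (>>= step)                 -- though writing it this way already uses >>=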
edit2: OK, here's what I should've said from the start:
It's an EDSL, E as in embedded domain-specific language: "embedded" means that the language's statements are plain values in our language, Haskell.
Let's try to have us an IO-language. Imagine we have a primitive `print1 :: IO ()`, describing the action of printing the integer `1` at the prompt. Imagine we also have `print2 :: IO ()`. Both are plain Haskell values. In Haskell, we only speak of these actions; this IO-language still needs to be interpreted / acted upon by some part of the run-time system later, at "run"-time. Having two languages, we have two worlds, two timelines.
We could write `do { print1 ; print2 }` to describe compound actions. But we can't create a new primitive for printing `3` at the prompt, as it is outside our pure Haskell world. What we have here is an EDSL, but evidently not a very powerful one. We must have an infinite supply of primitives here; not a winning proposition. And it is not even a Functor, as we can't modify these values.
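Here is a hypothetical toy model of such a closed little language (just an illustration I'm sketching, not how IO is actually implemented): its only statements are the two fixed primitives and sequencing, and its interpreter lives elsewhere.

-- A closed EDSL: statements are plain Haskell values, but the set of
-- primitives is fixed once and for all.
data Toy = Print1 | Print2 | Seq Toy Toy

-- The "run-time system" of this toy language; ordinary IO stands in for it here.
runToy :: Toy -> IO ()
runToy Print1    = putStrLn "1"
runToy Print2    = putStrLn "2"
runToy (Seq a b) = runToy a >> runToy b

program :: Toy
program = Print1 `Seq` Print2   -- we can sequence what we were given...
-- ...but there is no way to build a "print 3" statement, and no Functor
-- instance to modify the value inside an existing one.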
Now, what if we could? We'd then be able to tell `do { print1 ; print2 ; fmap (1+) print2 }`, to print out `3` as well. Now it's a Functor. More powerful, still not flexible enough.
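As an aside, the real IO type is a Functor; there `fmap` modifies the value an action will produce, without running the action. A small runnable illustration (my own example):

import Data.Char (toUpper)

shout :: IO String                  -- a new action description, built from getLine
shout = fmap (map toUpper) getLine  -- nothing has been executed yet

main :: IO ()
main = do { s <- shout ; putStrLn s }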
We get flexibility with primitives for constructing these action descriptors (like `print1`). One such primitive is e.g. `print :: Show a => a -> IO ()`. We can now talk about more versatile actions, like `do { print 42 ; getLine ; putStrLn ("Hello, " ++ "... you!") }`.
But now we see the need to refer to the "results" of previous actions. We want to be able to write `do { print 42 ; s <- getLine ; putStrLn ("Hello, " ++ s ++ "!") }`. We want to create (in the Haskell world) new action descriptions (Haskell values describing actions in the IO-world) based on the results (in the Haskell world) of previous IO-actions, results that those IO-actions will produce when they are run, when the IO-language is interpreted (the actions it describes being carried out in the IO-world).
This means the ability to create those IO-language statements from Haskell values, as with `print :: Show a => a -> IO ()`. And that is exactly the type you're asking about, `a -> m b`, and it is what makes this EDSL a Monad.
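For instance, that last do block is just syntactic sugar for nested uses of `>>=` (the standard desugaring, written out by hand):

greet :: IO ()
greet =
  print 42 >>= \_ ->
    getLine >>= \s ->
      putStrLn ("Hello, " ++ s ++ "!")

Each lambda here, such as `\s -> putStrLn ("Hello, " ++ s ++ "!")`, has exactly the `a -> m b` shape that `>>=` expects.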
Imagine we have an IO primitive (`a_primitive :: IO Int -> IO ()`) which prints any positive integer as is, and prints `"---"` on a separate line before printing any non-positive integer. Then we could write `a_primitive (return 1)`, as you suggest.
But IO is closed; it is impure; we can't write new IO primitives in Haskell, and there can't be a primitive already defined for every new idea that might come into our minds. So we write `(\x -> if x > 0 then print x else do { putStrLn "---" ; print x })` instead, and that lambda expression's type is `Int -> IO ()` (more or less).
If the argument `x` in the above lambda expression were of type `IO Int`, the expression `x > 0` would be mistyped. There is no way to get that `a` out of `IO a` without the use of the standard `>>=` operator (or its equivalent).
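Putting that together into a small runnable sketch (I'm using `readLn` to obtain the `Int`, since the `a_primitive` above is imaginary):

main :: IO ()
main =
  readLn >>= \x ->
    if x > 0
      then print (x :: Int)
      else do { putStrLn "---" ; print x }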
see also:
And, this quote:
"Someone at some point noticed, "oh, in order to get impure effects from pure code I need to do metaprogramming, which means one of my types needs to be 'programs which compute an X'. I want to take a 'program that computes an X' and a function which takes an X and produces the next program, a 'program that computes a Y', and somehow glue them together into a 'program which computes a Y' " (which is the
bind
operation). The IO monad was born."
edit: These are the four types of generalized function application:
( $ ) :: (a -> b) -> a -> b -- plain
(<$>) :: Functor f => (a -> b) -> f a -> f b -- functorial
(<*>) :: Applicative f => f (a -> b) -> f a -> f b -- applicative
(=<<) :: Monad f => (a -> f b) -> f a -> f b -- monadic
And here are the corresponding type derivation rules, with the argument order flipped for clarity:
 a          f a         f a            f a
 a -> b     a -> b      f (a -> b)     a -> f b
 ------     --------    ----------     ----------
 b          f b         f b            f b
 no `f`s    one `f`     two `f`s,      two `f`s:
                        both known     one known,
                                       one constructed
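Here's a concrete instance of all four, with Maybe standing in for `f` (values chosen arbitrarily, just for illustration):

plain       :: Int
plain       =              (+ 1) $   2          -- 3
functorial  :: Maybe Int
functorial  =              (+ 1) <$> Just 2     -- Just 3
applicative :: Maybe Int
applicative =         Just (+ 1) <*> Just 2     -- Just 3
monadic     :: Maybe Int
monadic     = (\x -> Just (x + 1)) =<< Just 2   -- Just 3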
Why? They just are. Your question really is: why do we need Monads? Why aren't Functors or Applicative Functors enough? And this has surely been asked and answered many times already (e.g., the 2nd link in the list just above). For one, as I tried to show above, monads let us code new computations in Haskell.