For the function monad I find that (<*>) and (>>=)/(=<<) have two strikingly similar types. In particular, (=<<) makes the similarity more apparent:
(<*>) :: (r -> a -> b) -> (r -> a) -> (r -> b)
(=<<) :: (a -> r -> b) -> (r -> a) -> (r -> b)
So it's like both (<*>) and (>>=)/(=<<) take a binary function and a unary function, and constrain one of the two arguments of the former to be determined from the other one, via the latter. After all, we know that for the function applicative/monad,
f <*> g = \x -> f x (g x)
f =<< g = \x -> f (g x) x
And they look so strikingly similar (or symmetric, if you want), that I can't help but think of the question in the title.
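For instance, here is a small sketch (ap1 and bind1 are just illustrative names of mine) using a non-commutative binary function, so that the difference in which slot the shared argument lands in actually shows:

ap1, bind1 :: Integer
ap1   = ((-) <*> (*2)) 5   -- 5 - (5*2)  == -5
bind1 = ((-) =<< (*2)) 5   -- (5*2) - 5  ==  5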
As regards monads being "more powerful" than applicative functors, in the hardcopy of LYAH's For a Few Monads More chapter, the following is stated:
[…] join cannot be implemented by just using the functions that functors and applicatives provide.
i.e. join can't be implemented in terms of (<*>), pure, and fmap.
But what about the function applicative/monad I mentioned above?
I know that join === (>>= id), and that for the function monad that boils down to \f x -> f x x, i.e. a binary function is made unary by feeding the one argument of the latter as both arguments of the former.
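Spelled out, using m >>= f = \x -> f (m x) x for the function monad, the reduction is just:

join m  =  m >>= id
        =  \x -> id (m x) x
        =  \x -> m x x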
Can I express it in terms of (<*>)? Well, actually I think I can: isn't flip ($) <*> f === join f correct? Isn't flip ($) <*> f an implementation of join which does without (>>=)/(=<<) and return?
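As a quick sanity check, a sketch with an illustrative name joinF (assuming the standard ((->) r) instances):

import Control.Monad (join)

-- join for functions, without (>>=)/(=<<) or return:
joinF :: (r -> r -> b) -> (r -> b)
joinF f = flip ($) <*> f

-- e.g. joinF (+) 3 == 6, the same as join (+) 3
check :: Bool
check = joinF (+) 3 == (join (+) 3 :: Integer)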
However, thinking about the list applicative/monad, I can express join without explicitly using (=<<)/(>>=) and return (and not even (<*>), fwiw): join = concat; so probably the implementation join f = flip ($) <*> f is also kind of a trick that doesn't really show whether I'm relying just on Applicative or also on Monad.
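For lists that really is the whole story (joinL is just an illustrative name):

-- join for lists, with no (>>=), return, or (<*>) in sight:
joinL :: [[a]] -> [a]
joinL = concat
-- e.g. joinL [[1,2],[3]] == [1,2,3]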
When you implement join like that, you're using knowledge of the function type beyond what Applicative gives you. This knowledge is encoded in the use of ($). That's the "application" operator, which is the core of what a function even is. The same thing happens with your list example: you're using concat, which is based on knowledge of the nature of lists.
In general, if you can use the knowledge of a particular monad, you can express computations of any power. For example, with Maybe you can just match on its constructors and express anything that way. When LYAH says that monad is more powerful than applicative, it means "as abstractions", not applied to any particular monad.
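For instance, a sketch of what "matching on its constructors" amounts to for Maybe (joinMaybe is an illustrative name):

-- join for Maybe, written by pattern matching on the constructors,
-- i.e. using knowledge of this particular monad rather than of the abstraction:
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe (Just (Just x)) = Just x
joinMaybe _               = Nothing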
edit2: The problem with the question is that it is vague. It uses a notion ("more powerful") that is not defined at all and leaves the reader guessing as to its meaning. Thus we can only get meaningless answers. Of course anything can be coded while using the full arsenal of Haskell at our disposal. This is a vacuous statement. And it wasn't the question.
The cleared-up question, as far as I can see, is: using the methods from Monad / Applicative / Functor respectively as primitives, without using explicit pattern matching at all, is the class of computations that can thus be expressed strictly larger for one or the other set of primitives in use? Now this can be meaningfully answered.
Functions are opaque, though. No pattern matching is present anyway. Without restricting what we can use, again there's no meaning to the question. The restriction then becomes: no explicit use of named arguments, no pointful style of programming, so that we only allow ourselves to code in combinatory style.
So then, for lists, with fmap and app (<*>) only, there are only so many computations we can express, and adding join to our arsenal does make that set larger. Not so with functions: join = W = CSI = flip app id. The end.
Having implemented app f g x = (f x) (g x) = id (f x) (g x), i.e. app :: (->) r (a->b) -> (->) r a -> (->) r b, I already have flip app id :: (->) r (r->b) -> (->) r b, so I might as well call it join since the type fits. It already exists whether I wrote it or not. On the other hand, from app fs xs :: [] (a->b) -> [] a -> [] b, I can't seem to get [] ([] b) -> [] b. Both ->s in (->) r (a->b) are the same; functions are special.
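Spelled out as a sketch (the names app and joinF are mine):

app :: (r -> (a -> b)) -> (r -> a) -> (r -> b)
app f g x = (f x) (g x)

-- flip app id forces a ~ r, turning a binary function into a unary one;
-- this is the join for functions:
joinF :: (r -> (r -> b)) -> (r -> b)
joinF = flip app id       -- \f x -> f x (id x)  ==  \f x -> f x x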
(BTW, I don't see at the moment how to code the lists' app explicitly without actually coding up join as well. Using list comprehensions is equivalent to using concat; and concat is not an implementation of join, it is join).
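For what it's worth, a sketch of that point (appL is an illustrative name):

-- the list app written out explicitly; one way or another it goes through
-- concat, i.e. through join:
appL :: [a -> b] -> [a] -> [b]
appL fs xs = concat (map (\f -> map f xs) fs)
-- or, equivalently, with a comprehension, which desugars through concatMap:
-- appL fs xs = [ f x | f <- fs, x <- xs ]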
join f = f <*> id is simple enough that there should be no doubts. (edit: well, apparently there were still doubts).
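Indeed, for functions:

f <*> id  =  \x -> f x (id x)  =  \x -> f x x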
(=<<) = (<*>) . flip for functions. That's it. That's what it means that for functions Monad and Applicative Functor are the same. flip is a generally applicable combinator; concat isn't. There's a certain conflation there, with functions, sure. But there's no specific functions-manipulating function there (like concat is a specific lists-manipulating function), or anywhere, because functions are opaque.
Seen as a particular data type, it can be subjected to pattern matching. As a Monad though it only knows about >>= and return. concat does use pattern matching to do its work. id does not.
id here is akin to lists' [], not concat. That it works is precisely what it means that functions seen as Applicative Functor or Monad are the same. Of course in general Monad has more power than Applicative, but that wasn't the question. If you could express join for lists with <*> and [], I'd say it'd mean they have the same power for lists as well.
In (=<<) = (<*>) . flip, both flip and (.) do nothing to the functions they get applied to. So they have no knowledge of those functions' internals. Like, foo = foldr (\x acc -> x+1) 0 will happen to correctly calculate the length of the argument list if that list were e.g. [1,2]. Saying this, building on this, is using some internal knowledge of the function foo (same as concat using internal knowledge of its argument lists, through pattern matching). But just using basic combinators like flip and (.) etc., isn't.
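For completeness, a sketch checking that (<*>) . flip really does have the bind type for functions (bindF is just an illustrative name):

-- (=<<) for functions, built only from general-purpose combinators:
bindF :: (a -> r -> b) -> (r -> a) -> (r -> b)
bindF = (<*>) . flip

-- e.g. bindF (-) (*2) 5 == (5*2) - 5 == 5, same as ((-) =<< (*2)) 5
checkBind :: Bool
checkBind = bindF (-) (*2) 5 == (((-) =<< (*2)) 5 :: Integer)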