I'm just starting with Parsec (having little experience in Haskell), and I'm a little confused about using monads or applicatives. The overall feel I had after reading "Real World Haskell", "Write You a Haskell" and a question here is that applicatives are preferred, but really I have no idea.
So my questions are:
It might be worth paying attention to the key semantic difference between `Applicative` and `Monad`, in order to determine when each is appropriate. Compare the types:

```haskell
(<*>) :: m (s -> t) -> m s -> m t
(>>=) :: m s -> (s -> m t) -> m t
```
To deploy `<*>`, you choose two computations, one of a function, the other of an argument, then their values are combined by application. To deploy `>>=`, you choose one computation, and you explain how you will make use of its resulting values to choose the next computation. It is the difference between "batch mode" and "interactive" operation.
When it comes to parsing, `Applicative` (extended with failure and choice to give `Alternative`) captures the context-free aspects of your grammar. You will need the extra power that `Monad` gives you only if you need to inspect the parse tree from part of your input in order to decide what grammar you should use for another part of your input. E.g., you might read a format descriptor, then an input in that format. Minimizing your usage of the extra power of monads tells you which value-dependencies are essential.
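As a sketch of that contrast in `parsec` (the parser names `pair` and `counted` are invented for illustration):

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Context-free shape: the structure never depends on parsed values,
-- so Applicative combinators are enough.
pair :: Parser (Char, Char)
pair = (,) <$> letter <*> digit

-- The "format descriptor" case: how much input we read depends on a
-- value parsed earlier, so we need the Monad interface (>>=/do).
counted :: Parser String
counted = do
  n <- read <$> many1 digit   -- read a count...
  _ <- char ':'
  count n anyChar             -- ...then exactly that many characters
```

Here `counted` cannot be written with `<*>` alone, because the grammar for the second half is chosen by inspecting the value of the first.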
Shifting from parsing to parallelism, this idea of using `>>=` only for essential value-dependency buys you clarity about opportunities to spread load. When two computations are combined with `<*>`, neither need wait for the other. Applicative-when-you-can-but-monadic-when-you-must is the formula for speed. The point of `ApplicativeDo` is to automate the dependency analysis of code which has been written in monadic style and thus accidentally oversequentialised.
Your question also relates to coding style, about which opinions are free to differ. But let me tell you a story. I came to Haskell from Standard ML, where I was used to writing programs in direct style even if they did naughty things like throw exceptions or mutate references. What was I doing in ML? Working on an implementation of an ultra-pure type theory (which may not be named, for legal reasons). When working in that type theory, I couldn't write direct-style programs which used exceptions, but I cooked up the applicative combinators as a way of getting as close to direct style as possible.
When I moved to Haskell, I was horrified to discover the extent to which people seemed to think that programming in pseudo-imperative do-notation was just punishment for the slightest semantic impurity (apart, of course, from non-termination). I adopted the applicative combinators as a style choice (and went even closer to direct style with "idiom brackets") long before I had a grasp of the semantic distinction, i.e., that they represented a useful weakening of the monad interface. I just didn't (and still don't) like the way do-notation requires fragmentation of expression structure and the gratuitous naming of things.
That's to say, the same things that make functional code more compact and readable than imperative code also make applicative style more compact and readable than do-notation. I appreciate that `ApplicativeDo` is a great way to make programs written in monadic style more applicative (and in some cases that means faster) when you haven't the time to refactor them. But otherwise, I'd argue applicative-when-you-can-but-monadic-when-you-must is also the better way to see what's going on.
In general, start with whatever makes the most sense to you. Afterwards consider the following.
It is good practice to use `Applicative` (or even `Functor`) when possible. It is in general easier for a compiler like GHC to optimize these instances, since they can be simpler than `Monad`. I think the general community advice post-AMP has been to make your constraints as general as possible. I would also recommend the GHC extension `ApplicativeDo`, since it lets you uniformly use `do` notation while incurring only an `Applicative` constraint when that is all that is needed.
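For instance (a sketch; the function name `pairUp` is made up), with `ApplicativeDo` enabled GHC infers the weaker constraint by itself:

```haskell
{-# LANGUAGE ApplicativeDo #-}

-- Because neither binding refers to the other, ApplicativeDo desugars
-- this do-block to fmap/<*>, so only Applicative f is required --
-- without the extension it would demand Monad f.
pairUp :: Applicative f => f a -> f b -> f (a, b)
pairUp fa fb = do
  a <- fa
  b <- fb
  pure (a, b)
```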
Since the `ParsecT` parser type is an instance of both `Applicative` and `Monad`, you can mix and match the two. There are situations where doing so is more readable; it all depends on the case at hand.
Also, consider using `megaparsec`: it is a more recent fork of `parsec` that is more actively maintained and generally cleaner.
Two things that, rereading my answer and the comments, I really didn't do a good job of clarifying:
- The main benefit of using `Applicative` is that for many types it admits much more efficient implementations (e.g. `(<*>)` can be more performant than `ap`).
- If you just want to write something like `(+) <$> parseNumber <*> parseNumber`, there is no need to drop into `ApplicativeDo`; it would just be more verbose. I'd use `ApplicativeDo` only when you start finding yourself writing very long or nested applicative expressions.
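On the first point: `ap` is defined in terms of `>>=`, so it sequences its two arguments even when no real dependency exists, whereas a type's hand-written `(<*>)` is free to exploit their independence. Below is the stock definition (renamed `ap'` here to avoid clashing with `Control.Monad.ap`):

```haskell
-- Mirrors the standard definition of ap from Control.Monad: it threads
-- >>= even though the function computation never influences the
-- argument computation.
ap' :: Monad m => m (a -> b) -> m a -> m b
ap' mf mx = mf >>= \f -> mx >>= \x -> return (f x)
```

A type like `Concurrently` can give `(<*>)` a genuinely parallel implementation; a definition routed through `>>=` cannot.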
Following on from @pigworker (I'm too new on here to comment, alas), it's worth noting `join $ fM <*> ... <*> ... <*> ...` as a pattern as well. It nets you a "bind1, bind2, bind3..." family the same way `<$>` and `<*>` get you an "fmap1, fmap2, fmap3" one.
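A sketch of one member of that family (the name `bind2` is mine):

```haskell
import Control.Monad (join)

-- "bind2": feed two effectful arguments to a two-argument effectful
-- function. The applicative part builds an m (m c); join flattens it.
bind2 :: Monad m => (a -> b -> m c) -> m a -> m b -> m c
bind2 f ma mb = join (f <$> ma <*> mb)
```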
As a stylistic thing, when you're used enough to the combinators, it's possible to use `do` much the same as you would use `let`: as a way to highlight when you want to name something. I tend to want to name things more often in parsers, for example, because the names probably correspond to names in the spec!