
Haskell: Would "do" notation be useful for contexts other than monads?

Tags: syntax, haskell

We all love do, and I was curious whether this sort of alternative syntax could be useful outside the monad world. If so, what other sorts of computations would it simplify? Would it make sense to have something equivalent for Applicative, for example?

asked Jul 26 '10 by J Cooper

2 Answers

My sense is that many Haskell programmers don't love do at all, and one of the common arguments in favor of using Applicative when you don't need the full power of Monad is that the combinators <$>, <*>, etc. allow a very clear and concise coding style.

Even for monadic code, many people prefer using =<< explicitly instead of do notation. camccann's answer to your previous question about <*> gives a fantastic argument for this preference.

I tend to write my first drafts using do and then replace with combinators as I revise. This is just a matter of my own (in)experience and taste: it's often easiest for me to sketch things out in a more imperative fashion (which is more convenient with do), but I think non-do code is usually prettier.
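For a concrete sense of that contrast, here is a minimal sketch; the getName and getAge actions are invented stand-ins for whatever effects you actually have:

    data Person = Person String Int

    -- invented actions, standing in for any Applicative/Monad effects
    getName :: IO String
    getName = getLine

    getAge :: IO Int
    getAge = readLn

    -- do-notation draft: names every intermediate result
    personDo :: IO Person
    personDo = do
      name <- getName
      age  <- getAge
      return (Person name age)

    -- applicative-combinator revision: no intermediate names at all
    personAp :: IO Person
    personAp = Person <$> getName <*> getAge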

For arrows, on the other hand, I can't imagine not using proc and command do. The tuples just get so ugly so quickly.
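As a small sketch of what that looks like (essentially the classic example from the GHC arrow-notation documentation): run two arrows on the same input and add the results, once with proc/do and once with bare combinators, where the tuple plumbing that proc hides becomes visible.

    {-# LANGUAGE Arrows #-}
    import Control.Arrow

    -- with proc/do notation: intermediate results get names
    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = proc x -> do
      y <- f -< x
      z <- g -< x
      returnA -< y + z

    -- with plain combinators: (&&&) and uncurry are the tuple plumbing
    addA' :: Arrow a => a b Int -> a b Int -> a b Int
    addA' f g = f &&& g >>> arr (uncurry (+))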

answered by Travis Brown

It might help to consider, regarding do notation itself, what it's actually good for. As Travis Brown points out, I've previously advocated the use of a "function application" style with Monads and related types, but there's a flip side to that as well: Some expressions simply can't be written cleanly in direct function application style. For instance, the following can quickly make applicative style clumsy:

  • Intermediate results used in multiple subexpressions, at different depths of nesting
  • Arguments to the outermost function used deeply-nested in subexpressions
  • Awkward or inconsistent argument order, i.e. needing to partially apply a function to something other than its first argument
  • Deeply embedded flow control based on intermediate results, with shared subexpressions between branches
  • Pattern matching on intermediate results, particularly in the case of extracting part of a result, using that for further computation, then reconstructing a modified version as the next result

Writing such a function as a single expression generally requires either multiple nested lambdas, or the kind of absurd obfuscating nonsense that gives point-free style a bad name. A do block, on the other hand, provides syntactic sugar for easy nested scoping of intermediate results with embedded control flow.
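A small invented example of the first couple of points: the same Maybe computation written as a single expression with nested lambdas, and then as a do block.

    -- intermediate results a and b are both reused in the final branch;
    -- as a single expression this forces nested lambdas
    lookupBoth :: String -> String -> [(String, Int)] -> Maybe (Int, Int)
    lookupBoth k1 k2 env =
      lookup k1 env >>= \a ->
      lookup k2 env >>= \b ->
      if a >= b then Just (a, b) else Just (b, a)

    -- the do block gives the same nested scoping without the lambdas
    lookupBoth' :: String -> String -> [(String, Int)] -> Maybe (Int, Int)
    lookupBoth' k1 k2 env = do
      a <- lookup k1 env
      b <- lookup k2 env
      if a >= b then return (a, b) else return (b, a)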

Normally you'd probably extract such subexpressions and put them in a where clause or something, but since ordinary values form a monad with function application as (>>=), namely the Identity monad, you could conceivably write such a function in a do block instead, though people might look at you funny.
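A sketch of that (admittedly tongue-in-cheek) idea, using Identity from base; the computation itself is an arbitrary example:

    import Data.Functor.Identity (Identity (..))

    -- pure arithmetic in a do block over Identity, purely to get nested
    -- scoping of intermediate results; the formula is an arbitrary example
    quadraticRoots :: Double -> Double -> Double -> (Double, Double)
    quadraticRoots a b c = runIdentity $ do
      let disc = b * b - 4 * a * c
      s <- Identity (sqrt disc)
      let r1 = (-b + s) / (2 * a)
          r2 = (-b - s) / (2 * a)
      return (r1, r2)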


Besides the scoping/binding stuff, the other thing a do block does for you is elide the operator that chains subexpressions together. It's not too hard to imagine other cases where it would be nice to have a notation for "combine these expressions using this function within this block", and then let the compiler fill in the blanks.

In the easy case, where the expressions all have the same type, putting them in a list and then folding it works well; building strings this way with unwords and unlines, for instance. The benefit of do is that it combines expressions with common structure and compatible (but not identical) types.
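For instance, a small sketch of that same-type case, with invented contents:

    -- pieces of the same type (String) combined by folding a list:
    -- unwords joins with spaces, unlines joins with newlines
    report :: String -> Int -> String
    report name score = unlines
      [ unwords ["Name:", name]
      , unwords ["Score:", show score]
      ]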

In fact, the same general principle holds for the "idiom bracket" notation from the Applicative paper: where do blocks use newlines to elide monadic sequencing, idiom brackets use juxtaposition to elide lifted function application. The proc notation for Arrow is similar again, and other concepts could be expressed just as cleanly in that style, for example:

  • Composing data structures, e.g. merging result sets of some sort, eliding the merge function
  • Other function application idioms, such as argument-first "forward pipe" style, eliding the application operator (see the sketch after this list)
  • Parallel computations, eliding the result aggregation function
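As a rough sketch of the "forward pipe" item above: in today's Haskell you write the application operator (&) explicitly at every stage, and a dedicated block notation would be eliding exactly that operator, the way do elides (>>=). The pipeline stages here are arbitrary.

    import Data.Function ((&))

    -- argument-first pipeline with an explicit (&) at each stage
    process :: [Int] -> Int
    process xs =
      xs
        & filter even
        & map (* 2)
        & sum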

Although it's not too hard to make many of these into either a single type or a full Monad instance, it might be nice to have a unified, extensible bit of syntactic sugar for the general concept. There's certainly a common thread tying together all these and more, but that's a much larger topic not really related to syntax...

answered by C. A. McCann