Haskell syntax requires the relatively noisy `f . g $ 3` compared to `3 g f`, as in stack-oriented languages. What were the main design arguments for this choice? (That could also be written `f (g 3)`.)

Why is Haskell not a concatenative language?
Based on "A History of Haskell", Haskell was influenced by a variety of functional-programming and lazy-language experiments, including ML. As Section 4 ("Syntax") describes:
Currying

Following a tradition going back to Frege, a function of two arguments may be represented as a function of one argument that itself returns a function of one argument. This tradition was honed by Moses Schönfinkel and Haskell Curry and came to be called currying. Function application is denoted by juxtaposition and associates to the left. Thus, `f x y` is parsed `(f x) y`. This leads to concise and powerful code. For example, to square each number in a list we write `map square [1,2,3]`, while to square each number in a list of lists we write `map (map square) [[1,2],[3]]`. Haskell, like many other languages based on lambda calculus, supports both curried and uncurried definitions, […]
The concept of currying is so central to Haskell's semantics, and to the lambda calculus at its core, that any other arrangement would interact poorly with the language.
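To make the quoted examples concrete, here is a minimal runnable sketch of curried versus uncurried definitions; `add`, `addPair`, and `square` are helpers assumed for illustration:

```haskell
-- Curried definition: add takes one Int and returns a function Int -> Int.
add :: Int -> Int -> Int
add x y = x + y

-- Uncurried definition: addPair takes a single pair instead.
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

square :: Int -> Int
square n = n * n

main :: IO ()
main = do
  print (map square [1, 2, 3])           -- [1,4,9]
  print (map (map square) [[1, 2], [3]]) -- [[1,4],[9]]
  print ((add 1) 2)                      -- 3; application associates left, so this is add 1 2
  print (curry addPair 1 2)              -- 3; Prelude's curry converts between the two styles
```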
`3 g f` in such a language is rather `f $ g $ 3` in Haskell. Of course, that's equivalent to `f . g $ 3`, but it only works as long as you immediately apply the composition to some value. In Haskell, you very often compose functions just to hand them to some higher-order combinator, or to make a point-free definition. In a stack-oriented language, that requires some sort of explicit block; in Haskell, it requires just the `.` operator.
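For instance, here is a small sketch of both uses; `shout` is a hypothetical example definition:

```haskell
import Data.Char (toUpper)

-- A point-free definition: shout is built by composition alone;
-- no argument is ever named, and no value is applied yet.
shout :: String -> String
shout = map toUpper . (++ "!")

main :: IO ()
main = do
  print (shout "compose")                -- "COMPOSE!"
  -- A composition handed straight to a higher-order combinator:
  print (map (negate . abs) [1, -2, 3])  -- [-1,-2,-3]
```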
Usually, you don't just chain "atomic" functions. Certainly you don't deal with globally named single-letter functions, so the tiny `.` or `$` doesn't really make a dramatic difference verbosity-wise. And very often, as rmmh said, you chain partially applied functions, e.g.
```haskell
main = interact $ unlines . take 10 . filter ((>20) . length) . lines
```
That's much more cumbersome without cheap tight-binding application. Also, it's very natural to have the separating `.` to mark what's not immediately applied but just composed.
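For comparison, here is a sketch of the same program with the composition chain expanded by hand, using only parentheses and a lambda; the pipeline structure gets buried in the nesting:

```haskell
-- Equivalent to the one-liner above, without (.) and ($).
main :: IO ()
main = interact (\s -> unlines (take 10 (filter (\l -> length l > 20) (lines s))))
```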
If you're interested in the history of Haskell, Hudak, Hughes, Peyton Jones & Wadler's "A History of Haskell: Being Lazy with Class" is the best-known paper on this topic, and well worth reading.
It doesn't address your question directly, but it does point out one very relevant fact: Haskell was created as a unifying compromise between a bunch of existing languages from small teams. Quoting section 2.2 ("A tower of Babel"):
As a result of all this activity, by the mid-1980s there were a number of researchers, including the authors, who were keenly interested in both design and implementation techniques for pure, lazy languages. In fact, many of us had independently designed our own lazy languages and were busily building our own implementations for them. We were each writing papers about our efforts, in which we first had to describe our languages before we could describe our implementation techniques. Languages that contributed to this lazy Tower of Babel include:
- Miranda […]
- Lazy ML (LML) […]
- Orwell […]
- Alfl […]
- Id […]
- Clean […]
- Ponder […]
- Daisy […]
So the answer may simply be that Haskell copied this from its predecessor languages. And since a bunch of these languages were in turn based on or inspired by Lisp and ML, they may analogously have copied it from them. So, to quote your question again:
What were main design arguments for this choice?
Chances are that there was never a sustained argument for the choice. Very few high-level languages have gone for the stack-based design, in any case, and few people know them.