 

Making a concatenative Haskell variant: precedence of application and composition

I'm learning the basics of concatenative languages, whose original idea is that concatenating function names denotes function composition, rather than function application as it does in Haskell.

Joy, Forth, and Factor are postfix (and stack based), but there are prefix concatenative languages as well, such as Om.

I wonder whether a Haskell variant could, at least in theory, be a concatenative language just by swapping (or even equating) the precedence of composition (currently 9) with that of function application (effectively 10).

If values in Haskell are just zero-argument functions, why is function application different from function composition? Isn't function application the same as composing with a zero-argument function?

Would it be possible, in a simple way, to make an interpreter or precompiler that transforms concatenative syntax into Haskell syntax, by defining new composition and application operators with different precedences and assuming that plain concatenation without parentheses means composition? I think that it is just a question of syntax (am I wrong?), and it would avoid many of the cases where we have to use parentheses or the $ operator in Haskell. Or is it a more fundamental problem, not just one of syntax and precedence?

Hint: suppose that every function and operator in Haskell is prefix; for this exercise we can forget about infix notation and all kinds of "syntactic sugar".
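
To make the idea concrete, something along these lines is what I have in mind; the operators (%) and (#) are names I am making up purely for illustration:

-- purely illustrative operator names, with precedences chosen via fixity declarations
infixl 8 %   -- an application-like operator
infixr 9 #   -- a composition-like operator that binds tighter than (%)

(%) :: (a -> b) -> a -> b
f % x = f x

(#) :: (b -> c) -> (a -> b) -> (a -> c)
(#) = (.)

-- negate # abs % (-5)  parses as  (negate # abs) % (-5)  ==  negate (abs (-5))  ==  -5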

asked Dec 14 '14 by enrique

2 Answers

If values in Haskell are just constant functions, why is function application different from function composition? Isn't function application the same as composing with a constant function?

Values in Haskell are not “constant functions”. Nor are they “nullary functions”. The only things in Haskell that are functions are those whose type includes the function-arrow constructor ->.

A constant function is one which, given any input, returns the same output.

alwaysOne x = 1

map alwaysOne [1..5] == [1, 1, 1, 1, 1]

The derivative of such a function is 0. The const function constructs such functions conveniently, by ignoring its second argument and always returning the first.

map (const 1) [1..5] == [1, 1, 1, 1, 1]

The concept of a “nullary function” only makes sense in a language in which functions may take multiple arguments—in Haskell, definitions of functions with multiple arguments are syntactic sugar for definitions of chained functions with one argument, in a process known as currying. All these definitions are equivalent.

foo x y = x + y

foo x = \y -> x + y

foo = \x -> \y -> x + y

(In practice, for efficiency reasons, GHC’s runtime deals in multi-argument functions and only constructs closure objects for partially applied functions.)
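
For instance (a minimal illustration with made-up names), partially applying such a function produces a closure that captures the supplied argument:

addTo :: Int -> Int -> Int
addTo x y = x + y

addFive :: Int -> Int
addFive = addTo 5          -- partially applied: a closure capturing 5

-- map addFive [1, 2, 3] == [6, 7, 8]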

I think that it is just a question of syntax, am I wrong?

The basic idea of concatenative languages is that programs denote functions, and concatenating two programs gives you a program that denotes the composition of those functions. So if f is a function, and g is a function, then f g is a program that denotes what we would write in Haskell as g . f. In abstract algebra terms, there is a homomorphism from the syntactic monoid onto the semantic monoid. That is the issue of syntax.
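
As a rough sketch of that correspondence in Haskell itself, using (>>>) from Control.Arrow, which for ordinary functions is simply flipped composition:

import Control.Arrow ((>>>))

double, increment :: Int -> Int
double x = x * 2
increment x = x + 1

-- the concatenative program "double increment" corresponds to
pipeline :: Int -> Int
pipeline = double >>> increment   -- the same as increment . double

-- pipeline 3 == 7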

However, there is also an issue of semantics. These functions must manipulate the program state being implicitly passed between them, and the state of a real program is complex—so in practice, concatenative languages tend to use a tuple denoting a stack of values, because this is simple and efficient to implement in real hardware. In theory, the program state could be anything, such as a map or a set.
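
For example, here is a minimal sketch of that stack-as-nested-tuples idea; push, dup, and addInts are names I am inventing for illustration, not a standard API:

import Control.Arrow ((>>>))

-- the program state is a stack encoded as nested pairs
push :: a -> s -> (a, s)
push x s = (x, s)

dup :: (a, s) -> (a, (a, s))
dup (x, s) = (x, (x, s))

addInts :: (Int, (Int, s)) -> (Int, s)
addInts (x, (y, s)) = (x + y, s)

-- the concatenative program "2 3 add", read left to right
program :: s -> (Int, s)
program = push 2 >>> push 3 >>> addInts

-- fst (program ()) == 5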

I think the semantics of Haskell are the real stumbling-block here, and while you can embed a concatenative DSL into Haskell, you need a concatenative language in order to make this usable for day-to-day programming.

answered Sep 20 '22 by Jon Purdy

The best answer to my question is the article referenced by @Daniel Wagner in his second comment, "Concatenative, Row-polymorphic Programming in Haskell", written by Sami Hangaslammi as a response to another good article by @Jon Purdy, "Why Concatenative Programming Matters".

It shows "one way to implement a concatenative DSL inside Haskell", which is exactly what I wanted to find out: whether it was possible, and how.

answered Sep 18 '22 by enrique