 

What is the relation between syntax sugar, laziness and list elements accessed by index in Haskell?

Haskell lists are constructed by a sequence of calls to cons, after desugaring syntax:

Prelude> (:) 1 $ (:) 2 $ (:) 3 []
[1,2,3]

Are lists lazy because they are such a sequence of function calls? If so, how can the runtime access the values while calling the chain of functions? Is access by index also syntactic sugar? How could we express it in another, less sugared way than this?:

Prelude> (!!) lst 1
2

The underlying question here might be:

Are lists fundamental entities in Haskell, or can they be expressed as a composition of more essential concepts?

Is it possible to represent lists in the simplest lambda calculus?

I am trying to implement a language where lists are defined in the standard library, not as a special entity hardwired directly into the parser/interpreter/runtime.

dawid asked Aug 30 '21



1 Answer

Lists in Haskell are special in syntax, but not fundamentally.

Fundamentally, the Haskell list is defined like this:

data [] a = [] | (:) a ([] a)

Just another data type with two constructors, nothing to see here, move along.

The above is kind of pseudocode though, because you can't actually define something like that yourself: neither [] nor (:) is a valid constructor name. A special exception is made for the built-in lists.

But you could define the equivalent, something like:

data MyList a = Nil | Cons a (MyList a)

And this would work exactly the same with regards to memory management, laziness, and so on, but it won't have the nice square-bracket syntax (at least in Haskell 2010; in modern GHC you can get the special syntax for your own types too, thanks to overloaded lists).
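To make that concrete, here is a small sketch using the MyList type above. The function names myLength and myMap are made up for illustration; they recurse by pattern matching exactly the way the Prelude's length and map do on built-in lists:

```haskell
-- A user-defined list type, equivalent to the built-in one
-- except for the special square-bracket syntax.
data MyList a = Nil | Cons a (MyList a)
  deriving (Eq, Show)

-- Length by structural recursion, just like Prelude's length.
myLength :: MyList a -> Int
myLength Nil         = 0
myLength (Cons _ xs) = 1 + myLength xs

-- Map by structural recursion, just like Prelude's map.
myMap :: (a -> b) -> MyList a -> MyList b
myMap _ Nil         = Nil
myMap f (Cons x xs) = Cons (f x) (myMap f xs)
```

Nothing here relies on lists being built in; the compiler treats MyList like any other algebraic data type.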


As far as laziness goes, that's not special to lists, but is special to data constructors, or more precisely, pattern matching on data constructors. In Haskell, every computation is lazy. This means that whatever crazy chain of function calls you may construct, it's not evaluated right away. Doesn't matter if it's a list construction or some other function call. Nothing is evaluated right away.

But when is it evaluated then? The answer is in the spec: a value is evaluated when somebody tries to pattern match on it, and at that moment it's evaluated up to the data constructor being matched. So, for lists, this would be when you go case myList of { [] -> "foo"; x:xs -> "bar" } - that's when the call chain is evaluated up to the first data constructor, which is necessary in order to decide whether that constructor is [] or (:), which is necessary for evaluating the case expression.
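A quick way to see this in action: pattern matching forces the value only up to the outermost constructor, so a list whose tail is undefined can still be matched safely. The headOr helper below is a hypothetical name, written just for this demo:

```haskell
-- Return the first element of a list, or a default if it's empty.
-- Matching x:_ forces only the first (:) constructor.
headOr :: a -> [a] -> a
headOr d xs = case xs of
  []    -> d
  x : _ -> x

-- The tail is undefined, but it's never forced, so this is fine:
demo :: Int
demo = headOr 0 (1 : undefined)   -- evaluates to 1
```

If pattern matching evaluated the whole list eagerly, demo would crash on undefined; because evaluation stops at the first constructor, it doesn't.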


Index access is also not special, it works on the exact same principle: the implementation of the (!!) operator (check out the source code) repeatedly (recursively) matches on the list until it discovers N (:) constructors in a row, at which point it stops matching and returns whatever was on the left of the last (:) constructor.
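The same idea can be sketched against the user-definable MyList type (repeated here so the snippet is self-contained); `index` is a hypothetical name, but the recursion scheme mirrors the real (!!):

```haskell
data MyList a = Nil | Cons a (MyList a)

-- Walk the spine, forcing one (:)-equivalent constructor per step,
-- until n reaches zero, then return the element on its left.
index :: MyList a -> Int -> a
index Nil          _ = error "index: out of range"
index (Cons x _)   0 = x
index (Cons _ xs)  n = index xs (n - 1)
```

Note that only the spine up to position n is ever forced; elements past the index, and the elements themselves other than the one returned, stay unevaluated.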


In the "simplest" lambda calculus, in the absence of data constructors or primitive types, I reckon your only choice is to Church-encode lists (e.g. as a fold, or directly as a catamorphism) or build them up from other structures (e.g. pairs), which are themselves Church-encoded. Lambda calculus, after all, only has functions and nothing else. Unless you mean something more specific.
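For illustration, here is one way a Church-encoded list looks when transcribed into Haskell: the list is represented by its own fold, so cons and nil are just functions. The names ChurchList, cnil, ccons, and toList are made up for this sketch:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church-encoded list: a value that, given a "cons" function and a
-- "nil" value, folds itself. No data constructors needed in principle;
-- the newtype is only there to satisfy Haskell's type system.
newtype ChurchList a =
  ChurchList (forall r. (a -> r -> r) -> r -> r)

cnil :: ChurchList a
cnil = ChurchList (\_c z -> z)

ccons :: a -> ChurchList a -> ChurchList a
ccons x (ChurchList xs) = ChurchList (\c z -> c x (xs c z))

-- Recover an ordinary list by folding with (:) and [].
toList :: ChurchList a -> [a]
toList (ChurchList xs) = xs (:) []
```

In the untyped lambda calculus the newtype wrapper disappears and a list is literally the term \c z -> c x1 (c x2 (... z)), which is the shape a language without built-in data types could use.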

Fyodor Soikin answered Oct 18 '22