 

Why is function definition for all types at once not allowed in Haskell?


This is probably a very basic question, but ...

A function that's defined as, say

foo :: a -> Integer 

denotes a function from any type to an Integer. If so, then in theory one should be able to define it for any type, like so

foo 1      = 10
foo 5.3    = 100
foo (x:xs) = -1
foo _      = 0

But Haskell only allows a general definition, like foo a = 0.

And even if you restrict a to be one of a certain class of types, such as an instance of the Show typeclass:

foo :: (Show a) => a -> Integer 

you still can't do something like

foo "hello" = 10 foo   _     = 0 

even though "hello" :: [Char] is an instance of Show.
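The best I seem to be able to do with that constraint is to go through Show's interface instead of matching on a particular instance, e.g. a rough sketch like this (the body here is just an arbitrary example):

foo :: (Show a) => a -> Integer
foo x = toInteger (length (show x))   -- only Show's methods are available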

Why is there such a restriction?

asked May 30 '12 by user1425230

People also ask

How are functions defined in Haskell?

The most basic way of defining a function in Haskell is to "declare" what it does. For example, we can write:

double :: Int -> Int
double n = 2*n

Here, the first line specifies the type of the function and the second line tells us how the output of double depends on its input.

Does every Haskell function have a type?

Everything in Haskell has a type, so the compiler can reason quite a lot about your program before compiling it. Unlike Java or Pascal, Haskell has type inference.
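As a small illustration (assuming a GHCi session), you can leave off the signature entirely and ask GHC what it inferred:

double n = 2 * n

-- In GHCi:
--   ghci> :type double
--   double :: Num a => a -> a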

Does every Haskell function terminate?

Haskell allows non-terminating "functions", for example. Non-terminating functions are still pure functions in the mathematical sense. While this does somewhat complicate the use of programs as proofs, a pure function doesn't need to have a defined value for every possible input.
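For instance (a small sketch with made-up names), both of these are well typed even though neither ever produces a value:

loop :: a
loop = loop          -- diverges: evaluation never finishes

boom :: Integer
boom = undefined     -- throws an error if its value is ever demanded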

How does show function work in Haskell?

The shows functions return a function that prepends the output String to an existing String . This allows constant-time concatenation of results using function composition.
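A small sketch of the idea (render is a made-up helper): because shows produces a String -> String function, pieces are combined with function composition instead of repeatedly appending to a growing String:

render :: Int -> Int -> String
render x y = (shows x . showString ", " . shows y) ""
-- render 1 2 == "1, 2"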


2 Answers

It's a feature, and a very fundamental one. It boils down to a property known as parametricity in programming language theory. Roughly, that means that evaluation can never depend on types that are still variables at compile time: you cannot look at a value whose concrete type you do not know statically.

Why is that good? It gives much stronger invariants about programs. For example, you know from the type alone that a -> a has to be the identity function (or diverge). Similar "free theorems" apply to many other polymorphic functions. Parametricity is also the basis for more advanced type-based abstraction techniques. For example, the type ST s a in Haskell (the state monad) and the type of the corresponding runST function rely on s being parametric; that ensures the function being run has no way of messing with the internal representation of the state.
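To make the a -> a example concrete, here is a rough sketch (mystery is a made-up name): the body has no way to inspect what a is, so the only total definition returns the argument unchanged, and a definition that tries to special-case a type is rejected by the compiler:

mystery :: a -> a
mystery x = x        -- the only total function of this type

-- Rejected at compile time: the pattern forces a to be a list type,
-- contradicting the fully polymorphic signature.
-- mystery (x:xs) = xs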

It is also important for efficient implementation. A program does not have to pass around costly type information at run time (type erasure), and the compiler can choose overlapping representations for different types. As an example of the latter, 0 and False and () and [] are all represented by 0 at runtime. This wouldn't be possible if a function like yours was allowed.

answered Sep 19 '22 by Andreas Rossberg


Haskell enjoys an implementation strategy known as "type erasure": types have no computational significance, so the code that you emit doesn't need to track them. This is a significant boon for performance.

The price you pay for this performance benefit is that types have no computational significance: a function can't change its behavior based on the type of an argument it is passed. If you were to allow something like

f () = "foo" f [] = "bar" 

then that property would not be true: the behavior of f would, indeed, depend on the type of its first argument.

There are certainly languages that allow this kind of thing, in particular dependently typed languages, where types generally can't be erased anyway.
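For contrast, here is a rough sketch of how per-type behaviour is recovered in Haskell with a type class (the class name Foo is made up, and the [Char] instance needs the FlexibleInstances extension): the compiler passes a dictionary of methods for the concrete type, so nothing has to inspect the type at run time.

{-# LANGUAGE FlexibleInstances #-}

class Foo a where
  foo :: a -> Integer

instance Foo Integer where
  foo 1 = 10
  foo _ = 0

instance Foo Double where
  foo 5.3 = 100
  foo _   = 0

instance Foo [Char] where
  foo "hello" = 10
  foo _       = 0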

answered Sep 20 '22 by Daniel Wagner