
What does type level programming mean at runtime?

Tags:

haskell

I am very new to Haskell, so sorry if this is a basic question, or one founded on shaky understanding.

Type-level programming is a fascinating idea to me. I think I get the basic premise, but I feel like there is another side to it that is fuzzy to me. I get that the idea is to move logic and computation from runtime to compile time, using types. This way you turn what is normally runtime logic/state/data into static logic, e.g. the sizes of collections.

So I get that, for example, you can have type-level natural numbers and do type-level arithmetic on them, and all this calculation and type safety happens at compile time.
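To make this concrete, here's the sort of thing I mean, as a minimal sketch using GHC's DataKinds extension and GHC.TypeLits (the name seven is just mine for illustration):

    {-# LANGUAGE DataKinds, TypeOperators #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits

    -- The type checker reduces 3 + 4 to 7; natVal merely reflects
    -- the already-computed result down to an ordinary Integer.
    seven :: Integer
    seven = natVal (Proxy :: Proxy (3 + 4))

At runtime, seven is just the integer 7; nothing of the 3 + 4 survives type erasure.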

But what does such arithmetic imply at runtime, especially since Haskell has full type erasure? For example:

  • If I concatenate two type-level lists, does the type-level safety imply something about the behavior or performance of that concatenation at runtime? Or does the type-level programming aspect only have meaning at compile time, when the programmer is grappling with the code and putting things together? (See the sketch after this list.)
  • Or if I have two type level numbers, and then multiply them, what does that mean at runtime? If these operations on large numbers are slow at compile time, are they instantaneous at runtime?
  • Or if we implemented type-level RSA and then used it, what would that even mean at runtime?
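For the first bullet, here's a sketch of the kind of type-level list concatenation I have in mind (a hand-rolled type family, not from any particular library):

    {-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}

    -- Concatenation of two type-level lists, written as a type family.
    -- This 'runs' entirely inside the type checker.
    type family (xs :: [k]) ++ (ys :: [k]) :: [k] where
      '[]       ++ ys = ys
      (x ': xs) ++ ys = x ': (xs ++ ys)

As far as I understand, this ++ leaves no trace at all in the compiled program, which is exactly what prompts my questions about the runtime side.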

Is it purely a compile-time safety/coherence tool, or does type-level programming buy us anything at runtime too? Is the logic and arithmetic 'paid for at compile time' or merely 'assured at compile time' (if that even makes sense)?

asked May 11 '18 by World Outsider

1 Answer

As you rightly say, Haskell [without weird extensions] has full type erasure. So that means anything computed purely at the type level is erased at runtime.

However, to do anything useful, you connect the type-level stuff to your value-level stuff, so that the types guarantee useful properties of your values.

Suppose, for example, you want to write a function that takes a pair of lists, treats them as mathematical vectors, and computes their dot product. Now, the dot product is only defined on pairs of vectors of the same size, so if the sizes of the vectors don't match, you can't return a sensible answer.

Without type-level programming, your options are:

  • Require that the caller always supplies vectors of the same dimension, and cheerfully return gibberish if that requirement is not met. (I.e., ignore the problem.)
  • Perform an explicit check at run-time, and throw an exception or return Nothing or similar if the dimensions don't match. (Sketched below.)
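The second option might look like this (a sketch; dotProduct is an illustrative name):

    -- An explicit run-time check: a mismatch is only discovered when
    -- the function is actually called, and every caller has to handle
    -- the Nothing case.
    dotProduct :: [Double] -> [Double] -> Maybe Double
    dotProduct xs ys
      | length xs == length ys = Just (sum (zipWith (*) xs ys))
      | otherwise              = Nothing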

With type-level programming, you can make it so that if the dimensions don't match, the code does not compile! So that means at run-time you don't need to care about mismatched dimensions, because... well, if your code is running, then the dimensions cannot be mismatched.
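For concreteness, here is a minimal sketch of that idea using length-indexed vectors (the standard Vec construction with DataKinds and GADTs; the names are illustrative):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Peano naturals, promoted to the type level by DataKinds.
    data Nat = Zero | Succ Nat

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Zero a
      Cons :: a -> Vec n a -> Vec ('Succ n) a

    -- Both arguments share the same length n, so mismatched dimensions
    -- are rejected by the type checker; no run-time check (and no
    -- Maybe) is needed.
    dotProduct :: Num a => Vec n a -> Vec n a -> a
    dotProduct Nil         Nil         = 0
    dotProduct (Cons x xs) (Cons y ys) = x * y + dotProduct xs ys

Calling dotProduct on vectors of different lengths is now a compile error rather than a run-time failure.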

The types have all been erased by this point, but you are still guaranteed that your code cannot crash / return gibberish, because the compiler has checked that that cannot happen.

It's really the same as the ordinary checks the compiler does to make sure you don't try to multiply an integer by a string or something. The types are all erased before runtime, and yet the code does not crash.


Of course, to do a dot product, we merely have to check that two numbers are equal; we don't need any arithmetic yet. But it should be clear that to check whether the dimensions of our vectors match, we need to know the dimensions of our vectors. And that means any operation that changes the dimensions of our vectors needs to do its size calculation at compile time, so the compiler can know the result size and check that it satisfies the requirements.
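Continuing the Vec sketch above, appending two vectors is exactly such an operation: it changes the length, so the result length has to be computed at the type level (Add and append are again illustrative names):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    -- Type-level addition on the Peano Nat from the previous sketch.
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Zero     m = m
      Add ('Succ n) m = 'Succ (Add n m)

    -- Appending changes the length, and the compiler computes the new
    -- length, Add n m, while checking this definition.
    append :: Vec n a -> Vec m a -> Vec (Add n m) a
    append Nil         ys = ys
    append (Cons x xs) ys = Cons x (append xs ys)

And this bears on the performance question: at runtime, append is still an ordinary O(n) list traversal. The type-level Add is erased entirely; it assures the bookkeeping, it doesn't pay for the work.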

You can also do more elaborate stuff. Somewhere I saw a library that lets you define a client/server communications protocol, but because it encodes the protocol into ludicrously complicated type signatures [which the compiler automatically infers], it can statically prove that the client and server implement exactly the same protocol (i.e., no bugs with the server not handling one of the messages the client can send). The types get erased at runtime, but we still know the wire protocol can't go wrong.

answered Nov 07 '22 by MathematicalOrchid