Scala and Haskell have "Turing complete type systems". Usually, Turing completeness refers to computations and languages. What does it really mean in the context of types?
Could someone give an example of how a programmer can benefit from it?
PS I don't want to compare Haskell's and Scala's type systems; it's more about the term in general.
PPS If possible, more Scala examples, please.
A Turing machine is a machine with a symbol reader, an infinitely long tape of symbols, and a set of rules for modifying the tape. A language is Turing complete if it can implement a simulation of any Turing machine.
If a language is not Turing complete, there are computational problems it cannot solve; so, purely from a view internal to the language, you can't necessarily do everything you want. You'd run into the same limits if you tried to use a non-Turing-complete language to design some kind of general-purpose computer architecture.
In general, for an imperative language to be Turing complete, it needs:

- a form of conditional repetition or conditional jump (e.g., `while`, `if` + `goto`)
- a way to read and write some form of storage (e.g., variables, tape)
To prove that a language or device is Turing complete, all you have to do is show that it can be used to implement a universal Turing machine. A language that has a default flow of control, conditional execution, and repetition is Turing complete with only minor additional requirements.
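To make that concrete, here is a minimal value-level sketch of a single Turing machine step in Haskell. It is only an illustration of how little machinery a simulation needs; the `Tape`, `Rule`, and `step` names are invented for this sketch, and the infinite tape is approximated by a list zipper that grows on demand:

```haskell
-- Illustrative sketch, not a standard API.
data Move = MoveL | MoveR

-- The tape as a zipper: cells left of the head, the scanned cell,
-- cells to the right.
data Tape s = Tape [s] s [s]

-- A machine's rule table: the current state and scanned symbol determine
-- the symbol to write, the head movement, and the next state.
type Rule q s = q -> s -> (s, Move, q)

-- One step of the machine; 'blank' extends the tape on demand, which is
-- how a finite list stands in for an infinite tape.
step :: s -> Rule q s -> q -> Tape s -> (q, Tape s)
step blank rule q (Tape ls x rs) =
  let (x', move, q') = rule q x
  in case move of
       MoveL -> case ls of
                  []        -> (q', Tape [] blank (x' : rs))
                  (l : ls') -> (q', Tape ls' l (x' : rs))
       MoveR -> case rs of
                  []        -> (q', Tape (x' : ls) blank [])
                  (r : rs') -> (q', Tape (x' : ls) r rs')
```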
> What does it really mean in the context of types?
It means the type system has enough features in it to represent arbitrary computations. As a very short proof, I present below a type-level implementation of the SK calculus; there are many places that discuss the Turing-completeness of this calculus and what it means, so I won't rehash that here.
```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE TypeOperators #-}

infixl 1 `App`

-- Terms of the SK calculus, promoted to the type level via DataKinds.
data Term = S | K | App Term Term

-- A type-level evaluator: each equation is a reduction rule of the calculus.
type family Reduce t where
  Reduce S = S
  Reduce K = K
  Reduce (S `App` x `App` y `App` z) = Reduce (x `App` z `App` (y `App` z))
  Reduce (K `App` x `App` y)         = Reduce x
  Reduce (x `App` y)                 = Reduce (Reduce x `App` y)
```
You can see this in action at a ghci prompt; for example, in the SK calculus, the term `SKSK` reduces (eventually) to just `K`:
```
> :kind! Reduce (S `App` K `App` S `App` K)
Reduce (S `App` K `App` S `App` K) :: Term
= 'K
```
Here's a fun one to try as well:
```
> type I = S `App` K `App` K
> type Rep = S `App` I `App` I
> :kind! Reduce (Rep `App` Rep)
```
I won't spoil the fun -- try it yourself. But know how to terminate programs with extreme prejudice first.
> Could someone give an example of how a programmer can benefit from it?
Arbitrary type-level computation allows you to express arbitrary invariants on your types, and have the compiler verify (at compile-time) that they are preserved. Want a red-black tree? How about a red-black tree that the compiler can check preserves the red-black-tree invariants? That would be handy, right, since that rules out a whole class of implementation bugs? How about a type for XML values that is statically known to match a particular schema? In fact, why not go a step further and write down a parameterized type whose parameter represents a schema? Then you could read in a schema at runtime, and have your compile-time checks guarantee that your parameterized value can only represent well-formed values in that schema. Nice!
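As a small, concrete taste of that style, here is the classic length-indexed vector. It's a minimal sketch rather than anything library-specific: the length of the list is part of its type, so a head function can demand a provably non-empty argument, and the compiler rejects any call site that can't prove it:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

-- Type-level natural numbers.
data Nat = Zero | Succ Nat

-- A list whose length is tracked in its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- A head function that only accepts provably non-empty vectors;
-- 'safeHead Nil' is a type error, so the classic empty-list crash
-- is ruled out at compile time.
safeHead :: Vec ('Succ n) a -> a
safeHead (Cons x _) = x
```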
Or, perhaps a more prosaic example: what if you wanted your compiler to check that you never indexed your dictionary with a key that wasn't there? With a sufficiently advanced type system, you can.
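In GHC Haskell, a toy version of that check fits in a few lines using type-level strings. This is a minimal sketch (the `Dict` and `Member` names are made up for this example, not taken from any library): the dictionary's keys are tracked in its type, and a lookup requires compile-time evidence that the key is present:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (Symbol)

-- A toy dictionary (illustrative, not a real library) that records
-- its keys in its type.
data Dict (ks :: [Symbol]) v where
  Empty  :: Dict '[] v
  Insert :: Proxy k -> v -> Dict ks v -> Dict (k ': ks) v

-- 'Member k ks' is compile-time evidence that key k occurs in ks.
class Member (k :: Symbol) (ks :: [Symbol]) where
  lookupD :: Proxy k -> Dict ks v -> v

-- The key is at the front of the list...
instance {-# OVERLAPPING #-} Member k (k ': ks) where
  lookupD _ (Insert _ v _) = v

-- ...or it is somewhere in the tail.
instance Member k ks => Member k (j ': ks) where
  lookupD p (Insert _ _ d) = lookupD p d

-- This compiles, because "age" is provably a key of the dictionary:
ok :: Int
ok = lookupD (Proxy :: Proxy "age") (Insert (Proxy :: Proxy "age") 42 Empty)

-- This is rejected at compile time (no 'Member "name" '["age"]' instance):
-- bad = lookupD (Proxy :: Proxy "name") (Insert (Proxy :: Proxy "age") 42 Empty)
```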
Of course, there's always a price. In Haskell (and probably Scala?), the price of a very exciting compile-time check is spending a great deal of programmer time and effort convincing the compiler that the thing you're checking is true -- and this is often both a high up-front cost as well as a high ongoing maintenance cost.