In Haskell there is a term undefined :: a, which is said to be a term-level representation of a failed computation or endless loop. Since undefined has type a, it can be instantiated at any syntactically correct type by applying type substitutions. So undefined has every type, and the inverse is also true: every type contains undefined, the bottom value living inside any type (including the Void type, right?).
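A quick sketch of this claim (the names here are my own): undefined typechecks at any type, even Void, and thanks to laziness a bottom can sit inside a structure without ever being forced.

```haskell
import Data.Void (Void)

bottomInt :: Int
bottomInt = undefined

bottomVoid :: Void  -- even the "empty" type has this bottom inhabitant
bottomVoid = undefined

-- Laziness means bottoms can pass through code untouched:
pairLength :: Int
pairLength = length [undefined, undefined]  -- counts elements, never forces them

main :: IO ()
main = print pairLength  -- prints 2
```

Evaluating bottomInt or bottomVoid would crash, but merely mentioning them is fine; the program above runs happily because length never looks at the list's elements.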
The Curry-Howard isomorphism gives more than propositions as types; it also gives inhabited types as theorems.
A logic system with all propositions as theorems is said to be inconsistent.
So, is Haskell's type system isomorphic to an inconsistent logic system?
If so, what are the consequences?
If Haskell's type system is an inconsistent proof system, does that mean we can't trust it?
Would it be possible to represent an endless loop without undefined?
If Haskell's type system is an inconsistent proof system, then we can't trust it?
As @chi alluded to, type safety is a much weaker condition than logical consistency. Quoting Types and Programming Languages (a good read if you're interested in this sort of thing!),
Safety = Progress + Preservation
[...] What we want to know, then, is that well-typed terms do not get stuck. We show this in two steps, commonly known as the progress and preservation theorems.
- Progress: a well-typed term is not stuck (either it is a value or it can take a step according to the evaluation rules).
- Preservation: If a well-typed term takes a step of evaluation, then the resulting term is also well-typed.
These properties together tell us that a well-typed term can never reach a stuck state during evaluation.
Note that this definition of type safety doesn't exclude the possibility that a well-typed term will loop forever, or throw an exception. But those two rules do make the guarantee that if you've successfully obtained an Int then it really is an Int, and not a Bool or a Spoon or Ennui. (In Haskell this is somewhat complicated by laziness, but the basic idea remains true.) This is a very useful property, and programmers learn to rely upon it crucially in their daily workflow!
Allow me a digression. Just as logically-inconsistent-but-still-type-safe systems are useful and trustworthy in practice, type-unsafe-but-still-statically-checked systems are also useful in practice (though some may balk at calling them "trustworthy"). Look to the JavaScript ecosystem, where "gradually typed" systems such as TypeScript and Flow are designed to provide an additional layer of support for working programmers - because VanillaJS is a nightmare in even medium-sized codebases - without making bold promises like progress and preservation. (I hear the Python and PHP communities are also gradually (ha ha) adopting types.) TypeScript still successfully catches a large proportion of the human errors I make while programming, and supports changing code in the small and the large, despite not being "type safe" in the formal sense.
I guess my point is, the PL community has a tendency to place a great deal of emphasis on proving properties of systems (such as type safety), but that's often not how software engineers get their work done! The engineers I know care about whether a tool helps them write fewer bugs, not about whether it's been proven safe or consistent. I just think each group could learn a lot from the other! When the clever people in the ultra-typed community get together with the clever people in the untyped community to build tools for engineering, interesting things can happen.
Would it be possible to represent an endless loop without undefined?
Sure. Lots of ways.
loop = loop                               -- pure divergence
fib = 1 : 1 : zipWith (+) fib (tail fib)  -- a productive infinite list
repeat x = unfoldr (const (Just x))       -- unfoldr is from Data.List
main = forever $ print "cheese"           -- forever is from Control.Monad
From a formal perspective all of these terms are ⊥, but in practice it matters very much which ⊥ you're running.
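The fib example is a nice illustration of that last point: formally it's an infinite (hence partial) value, but it's productive, so any finite prefix is computed without trouble.

```haskell
-- A productive "bottom": the infinite Fibonacci list.
fib :: [Integer]
fib = 1 : 1 : zipWith (+) fib (tail fib)

main :: IO ()
main = print (take 10 fib)  -- [1,1,2,3,5,8,13,21,34,55]
```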
If your question was really does the ability to write looping terms imply inconsistency? then the short answer is yes, viz. let x = x in x :: forall a. a. That's why proof assistants like Agda typically feature a termination checker which examines the syntax of your program and rejects suspicious-looking usages of general recursion. The longer answer is no, not exactly: you can embed the general-recursive parts of your program in a monad and then implement the semantics of that partiality monad with some external evaluator. (This is precisely the approach that Haskell takes to purity: put the impure parts in the IO monad and delegate execution to the runtime system.)
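Here is a hedged sketch of what such a partiality (or "delay") monad might look like; the names Partial, runFor and countdown are my own, not a standard library API. General recursion is reified as data, and an external evaluator with a fuel budget decides how long to run it.

```haskell
-- A computation either has finished, or needs one more step.
data Partial a = Done a | Later (Partial a)

instance Functor Partial where
  fmap f (Done a)  = Done (f a)
  fmap f (Later p) = Later (fmap f p)

instance Applicative Partial where
  pure = Done
  Done f  <*> p = fmap f p
  Later q <*> p = Later (q <*> p)

instance Monad Partial where
  Done a  >>= k = k a
  Later p >>= k = Later (p >>= k)

-- The external evaluator: Nothing means "didn't finish in n steps".
runFor :: Int -> Partial a -> Maybe a
runFor _ (Done a)          = Just a
runFor n (Later p) | n > 0 = runFor (n - 1) p
runFor _ _                 = Nothing

-- A recursive function written with explicit steps:
countdown :: Int -> Partial ()
countdown 0 = Done ()
countdown n = Later (countdown (n - 1))
```

The point is that countdown itself is total as a Haskell definition; whether it "terminates" is now a question for runFor, outside the logic.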
In conclusion, yes, Haskell is inconsistent as a logic: every type is inhabited by an infinite family of ⊥s with a variety of runtime behaviours, so you can easily "prove" any proposition. That (along with the overall clunkiness of dependent types) is why people don't use Haskell as a theorem prover. For engineering, though, the lesser guarantees of type safety are generally quite enough, and some of those ⊥s are interesting!
Yes, the logic is inconsistent.
However, that does not mean that the type system is useless. For instance, the type system ensures we will never attempt to add integers and strings, or to project the first component out of a boolean. These usually are runtime type errors in some untyped languages.
We still do have some runtime errors, like non-exhaustive pattern matching (which I'd personally ban from the language :-P), error, undefined, exceptions, and so on. But at least we do not have type-related ones. We also have non-termination (through infinite recursion), which is a necessary evil in a Turing-complete language, and which would cause the inconsistency anyway.
Finally, I'd argue that Curry-Howard is useful even in such an inconsistent setting. E.g. if I wanted to write f :: Either a b -> a, I can immediately spot that this is not an intuitionistic theorem, hence f can only be realized using the above runtime errors, so it's probably a bad idea to begin with; if I need such an f, I should reconsider my design.
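A sketch of why (the function names here are illustrative): there is simply no value of type a available in the Right case, so any implementation must resort to a runtime error there. The type only becomes honestly implementable if the caller supplies a way out of b.

```haskell
-- Not an intuitionistic theorem: the Right case has nothing to return.
f :: Either a b -> a
f (Left a)  = a
f (Right _) = error "no 'a' to give back"

-- Provable variant: the caller provides an escape hatch for 'b'.
fromEither :: (b -> a) -> Either a b -> a
fromEither _ (Left a)  = a
fromEither g (Right b) = g b
```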
Would it also be useful to have a well-defined "total" fragment of Haskell, with the same types, yet without the bad things mentioned above? Absolutely! For instance, it would allow optimizing any expression of type () to the value (): after all, it has to evaluate to that. Also, e :: a :~: b would similarly be optimized to something like unsafeCoerce Refl, making it possible to write "proofs of equality between types" which would only incur O(1) runtime cost. Right now, unfortunately, such proofs have to be evaluated at runtime, causing a silly overhead only to ensure the proof is a "real" one, and not e.g. a disguised undefined.
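For reference, here is what such an equality proof looks like with Data.Type.Equality (castWith' is my own name for the standard castWith pattern): matching on Refl is exactly the runtime check that a total fragment of Haskell could erase.

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl))

-- A "proof" that Int equals Int. Today it must be forced at runtime,
-- precisely because it could secretly be a disguised 'undefined'.
proof :: Int :~: Int
proof = Refl

-- Pattern matching on Refl performs the runtime check, then lets us
-- treat an 'a' as a 'b'.
castWith' :: a :~: b -> a -> b
castWith' Refl x = x

main :: IO ()
main = print (castWith' proof (42 :: Int))
```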