I'm trying to implement Church booleans without the Prelude in Haskell. When the expression beq true true "TRUE" "FALSE" is evaluated, it works fine. But when I try to evaluate beq' true true "TRUE" "FALSE", it fails with a mismatch between the expected type and the actual type.
This is the code:
import qualified Prelude as P

i = \x -> x    -- identity
k = \x y -> x  -- const: select the first argument
ki = k i       -- select the second argument

true = k
false = ki

not = \p -> p false true
beq = \p q -> p (q true false) (q false true)
beq' = \p q -> p q (not q)
So I checked the inferred types of these:
*Main> :type beq
beq
:: (t1 -> t1 -> t2)
-> ((p1 -> p1 -> p1) -> (p2 -> p2 -> p2) -> t1) -> t2
*Main> :type beq'
beq'
:: (((p1 -> p2 -> p2) -> (p3 -> p4 -> p3) -> t1) -> t1 -> t2)
-> ((p1 -> p2 -> p2) -> (p3 -> p4 -> p3) -> t1) -> t2
And they are not equal.
Here are my questions:
1. I thought beq and beq' would have the same type signature, because they seemingly produce the same result once folded and substituted, just as there are many ways to implement one function. But they don't. Are there some hidden rules of Haskell's type inference at work here?
2. If I want to write beq using the function not, how can I make it work?
Church encodings work very well in an untyped calculus.
When types are added, things get more complicated. With simple types, for instance, the encodings are lost. With polymorphism they can be recovered, provided higher-rank types are supported. Note that type inference does not work well with higher-rank types, so some explicit type annotations are needed.
For example, your not should be written as:
{-# LANGUAGE RankNTypes #-}
type ChBool = forall a. a -> a -> a
not :: ChBool -> ChBool
not f x y = f y x
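With this annotated encoding, the beq from the question can be written in terms of not, answering the second question. A minimal self-contained sketch; note that beq is eta-expanded so that p is only ever applied at the monomorphic result type a, which avoids needing ImpredicativeTypes for p q (not q):

```haskell
{-# LANGUAGE RankNTypes #-}
import Prelude hiding (not)

type ChBool = forall a. a -> a -> a

true, false :: ChBool
true  x _ = x
false _ y = y

not :: ChBool -> ChBool
not f x y = f y x

-- "if p then q else not q", eta-expanded over the two branches
beq :: ChBool -> ChBool -> ChBool
beq p q x y = p (q x y) (not q x y)

main :: IO ()
main = do
  print (beq true true  "TRUE" "FALSE")  -- "TRUE"
  print (beq true false "TRUE" "FALSE")  -- "FALSE"
```

Writing beq p q = p q (not q) directly would require instantiating p at the polymorphic type ChBool, which GHC only accepts with the ImpredicativeTypes extension.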
It is important that boolean values are modeled as polymorphic functions, since otherwise they can only be used on a single type, making many examples fail. For instance, consider
foo :: ChBool -> (Int, String)
foo b = (b 3 2, b "aa" "bb")
Here b needs to be used twice, once on Ints and once on Strings. If the boolean type were not polymorphic, this would fail to typecheck.
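A minimal self-contained version of this example, assuming the ChBool type and encodings from above:

```haskell
{-# LANGUAGE RankNTypes #-}

type ChBool = forall a. a -> a -> a

true, false :: ChBool
true  x _ = x
false _ y = y

-- b is used at type Int -> Int -> Int in the first component and at
-- String -> String -> String in the second; this is only possible
-- because the argument type is rank-2 polymorphic
foo :: ChBool -> (Int, String)
foo b = (b 3 2, b "aa" "bb")

main :: IO ()
main = print (foo true)  -- (3,"aa")
```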
Further, you seem to be convinced that if you beta-reduce a Haskell expression, the inferred types before and after the reduction must be the same. That's not true in general, as you discovered in your experiments. To see why, here's a simple example:
id1 x = x
The inferred type here is id1 :: forall a. a -> a, obviously. Consider instead this variant:
id2 x = (\ _ -> x) e
Note that id2 beta-reduces to id1, whatever e might be. By choosing e carefully, though, we can restrict the type of x. E.g., let's choose e = x "hello":
id2 x = (\ _ -> x) (x "hello")
Now the inferred type is id2 :: forall b. (String -> b) -> String -> b, since x can only be a String-accepting function. It does not matter that e will never be evaluated; the type inference algorithm requires it to be well-typed anyway. This makes the inferred type of id2 differ from that of id1.
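These inferred types can be checked by writing them as explicit signatures; a small sketch, where the annotations match the types GHC infers:

```haskell
id1 :: a -> a
id1 x = x

-- the discarded argument (x "hello") forces x to accept a String,
-- even though laziness means it is never evaluated
id2 :: (String -> b) -> String -> b
id2 x = (\_ -> x) (x "hello")

main :: IO ()
main = do
  print (id1 (5 :: Int))    -- 5
  print (id2 length "abc")  -- 3
```

Trying to annotate id2 with the more general type a -> a is rejected by the type checker, which is exactly the difference observed between beq and beq' above.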