The book I've been reading, Structure and Interpretation of Computer Programs, presents Church numerals by defining zero and an increment function:
zero: λf. λx. x
increment: λn. λf. λx. f ((n f) x)
This seemed pretty complicated to me, and it took me a really long time to figure it out and derive one (λf. λx. f x) and two (λf. λx. f (f x)).
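To see the mechanism concretely, here is a small sketch of these definitions as Python lambdas (my own translation; the book itself uses Scheme, and the helper to_int is just for checking):

zero = lambda f: lambda x: x                          # apply f zero times
increment = lambda n: lambda f: lambda x: f(n(f)(x))  # apply f one more time

one = increment(zero)   # behaves like λf. λx. f x
two = increment(one)    # behaves like λf. λx. f (f x)

# Convert a Church numeral to a Python int by counting applications of f.
to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(zero), to_int(one), to_int(two))  # prints: 0 1 2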
Wouldn't it be much simpler to encode numbers this way instead, with zero being the empty lambda?
zero: λ
increment: λf. λ. f
Now it's trivial to derive one (λ. λ) and two (λ. λ. λ), and so on.
This seems like a much more immediately obvious and intuitive way to represent numbers with lambdas. Is there some problem with this approach and thus a good reason why Church numerals work the way they do? Is this approach already attested?
Your encoding (zero: λx.x, one: λx.λx.x, two: λx.λx.λx.x, etc.) makes it easy to define increment and decrement, but beyond that it becomes pretty tricky to develop combinators for your encoding. For example, how would you define isZero?
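To make the difficulty concrete, here is a rough Python sketch of your encoding (the names are mine, and I'm reading your empty lambdas as constant functions):

zero = lambda x: x                 # λx. x
increment = lambda n: lambda x: n  # wrap n in one more constant lambda

one = increment(zero)   # λx. λx. x
two = increment(one)    # λx. λx. λx. x

# Decrement is just as easy: applying a number to anything peels off one layer.
decrement = lambda n: n(None)      # decrement(two) is one; decrement(zero) misbehaves

# But there is no uniform isZero: zero(a) returns a, while two(a) returns one,
# and within pure lambda calculus nothing lets you tell those results apart.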
An intuitive way to think about the Church encoding is that a numeral n is represented by the action of iterating n times. This makes it easy to develop combinators like plus by just using the iteration encoded in the number, with no need for fancy combinators for recursion.
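For instance, continuing the Python sketch from above, plus can reuse the iteration that m itself carries (the name plus is mine):

# plus m n: starting from n, apply increment m times using m's own iteration.
plus = lambda m: lambda n: m(increment)(n)
print(to_int(plus(two)(one)))  # prints: 3

# Equivalently, apply f a total of m + n times:
# plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))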
In the Church encoding, every number has the same interface: it takes two arguments. In your encoding, by contrast, each number is defined by how many arguments it takes, which makes it really tricky to operate on numbers uniformly.
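That uniform interface is exactly what makes isZero easy for Church numerals: iterate a constant-false function n times starting from true, so zero iterations leave true and any positive count yields false. A sketch using the standard Church booleans:

true  = lambda a: lambda b: a   # Church true: pick the first argument
false = lambda a: lambda b: b   # Church false: pick the second argument

is_zero = lambda n: n(lambda _: false)(true)
# is_zero(zero) is true; is_zero(one), is_zero(two), ... are all false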
Another way to encode numerals would be to think of numbers as the union type n = 0 | S n and use a vanilla encoding for unions.
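One such vanilla union encoding is the Scott encoding, where a number is its own case analysis: it takes a value for the 0 case and a function for the S n case. A Python sketch (names mine; zero here replaces the earlier Church zero, and I use native booleans in is_zero for brevity):

zero = lambda z: lambda s: z               # the 0 case: return z
succ = lambda n: lambda z: lambda s: s(n)  # the S n case: pass n to s

# Case analysis is direct, with no iteration required.
is_zero = lambda n: n(True)(lambda _: False)
pred    = lambda n: n(zero)(lambda m: m)   # predecessor, with pred(zero) = zero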