Pure untyped lambda calculus is a powerful concept. However, building a machine or interpreter for real-world use is often described as (close to) impossible. I want to investigate this. Is it theoretically possible to build a comparatively fast untyped lambda calculus machine?
By comparatively fast I generally mean comparable to modern Turing-like architectures for a similar range of tasks, within a similar amount of resources (gates, operations, physical space, power use, etc).
I place no limitations on the implementation and architectural layers of the machine, except that it must be physically and somewhat realistically realizable. No restrictions on how to handle IO either.
How much is known about the feasibility of a computer architecture based around lambda calculus?
The benefit of lambda calculus is that it's an extremely simple model of computation that is equivalent to a Turing machine. But while a Turing machine is more like assembly language, lambda calculus is more like a high-level language.
Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine.
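To make the definition concrete, here is a minimal sketch (my own illustration, not from the question) of untyped lambda terms with capture-avoiding substitution and a single normal-order beta-reduction step, written in Haskell:

```haskell
-- A minimal sketch of the untyped lambda calculus: variables,
-- abstraction, application, capture-avoiding substitution, and one
-- leftmost-outermost (normal-order) beta-reduction step.
import Data.List (union, delete)

data Term = Var String          -- x
          | Lam String Term     -- \x. body
          | App Term Term       -- f a
          deriving Show

-- Free variables of a term.
free :: Term -> [String]
free (Var x)   = [x]
free (Lam x b) = delete x (free b)
free (App f a) = free f `union` free a

-- subst x s t replaces free occurrences of x in t with s,
-- renaming bound variables where necessary to avoid capture.
subst :: String -> Term -> Term -> Term
subst x s (Var y)
  | x == y    = s
  | otherwise = Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y b)
  | y == x          = Lam y b                         -- x is shadowed
  | y `elem` free s = let y' = fresh y (free s `union` free b)
                      in Lam y' (subst x s (subst y (Var y') b))
  | otherwise       = Lam y (subst x s b)
  where
    fresh v avoid =
      head [v ++ show n | n <- [1 :: Int ..], (v ++ show n) `notElem` avoid]

-- One beta-reduction step, if a redex exists.
-- e.g. step (App (Lam "x" (Var "x")) (Var "y")) gives Just (Var "y")
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step (Lam x b)         = Lam x <$> step b
step (Var _)           = Nothing
```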
First, it is possible to compile the lambda calculus efficiently to machine code even on existing architectures. After all, Scheme is the lambda calculus plus a bit extra, and it can be compiled efficiently. However, Scheme and company implement the lambda calculus under strict evaluation. It is also possible to compile the lambda calculus under non-strict evaluation efficiently! On this, see SPJ's two books for some background: http://research.microsoft.com/en-us/um/people/simonpj/papers/papers.html
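A small example of my own (not from SPJ's books) to illustrate the strict versus non-strict distinction: a function that ignores its argument terminates under non-strict evaluation even when that argument diverges, whereas a strict language evaluates the argument first.

```haskell
-- Haskell is non-strict, so an unused argument is never evaluated.

-- A diverging computation: reducing it never terminates.
loop :: Int
loop = loop

-- Under non-strict evaluation this is simply 1, because the unused
-- argument is never forced. A strict language (Scheme, ML) would
-- evaluate the argument first and diverge on the equivalent program.
example :: Int
example = const 1 loop

-- seq forces its first argument, recovering strict behaviour:
-- evaluating strictExample does not terminate.
strictExample :: Int
strictExample = loop `seq` 1
```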
On the other hand, it is also true that if we built hardware designed for functional languages, we could compile code to that hardware and do very well indeed. The best new stuff on this I know of is the Reduceron: http://www.cs.york.ac.uk/fp/reduceron/
The key to the performance of the Reduceron, which is quite compelling, is that it is built around parallel graph reduction, and aims to exploit the opportunities for parallelism made explicit in the reduction of lambda calculus expressions.
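As a very rough sketch of the graph-reduction idea (my own illustration, not the Reduceron's actual design): terms are held as a graph of mutable nodes, and when a shared redex is reduced, the node is overwritten in place with the result, so every reference sees the reduced form without repeating the work.

```haskell
-- Core mechanism of graph reduction: shared nodes are updated in place.
import Data.IORef

data Node = NVar String
          | NLam String (IORef Node)
          | NApp (IORef Node) (IORef Node)
          | NInd (IORef Node)            -- indirection left by an update

-- After reducing the redex stored in a node, overwrite it with an
-- indirection to the result; all sharers now see the reduced form.
update :: IORef Node -> IORef Node -> IO ()
update redex result = writeIORef redex (NInd result)

-- Follow indirections to the current representative of a node.
follow :: IORef Node -> IO (IORef Node)
follow ref = do
  n <- readIORef ref
  case n of
    NInd ref' -> follow ref'
    _         -> return ref
```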