
Proof of the Futamura projections in Haskell

I read Dan Piponi's excellent blog post on The Three Projections of Doctor Futamura. Towards the end of the article he has an appendix with a proof of the Futamura projections in Haskell. However, I find his article lacking information about the languages involved. What must the source, target and object languages of the specializer be in order for the Futamura projections to work? For example, would the Futamura projections work if I wrote a Haskell to LLVM specializer in Haskell? It would be helpful if you wrote a Haskell program to prove this just like Dan Piponi did in his article.

Aadit M Shah asked Sep 01 '17

1 Answer

Yes, the Futamura projections will work if and only if the source and object languages of the specializer are the same. This is because the specializer can only be applied to itself if it's written in the same language that it can read. However, the target language of the specializer is independent of the other two. This has important consequences which I'll discuss later in this answer.

To prove my hypothesis I'll introduce a new notation to describe programs, loosely based on tombstone diagrams. A tombstone diagram (or T-diagram) is a pictorial representation of a compiler or other related metaprogram. It is used to illustrate and reason about the transformation of a program from a source language (left of T) to a target language (right of T) as implemented in an object language (bottom of T). Let's extend the idea of T-diagrams to work for all programs:

α → β : ℒ -- A program is a function from α to β as implemented in language ℒ.

In the case of metaprograms α and β are themselves programs:

(α → β : 𝒮) × α → β : 𝒪 -- An interpreter for language 𝒮 as implemented in 𝒪.
(α → β : 𝒮) → (α → β : 𝒯) : 𝒯 -- A compiler from 𝒮 to 𝒯 as implemented in 𝒯.
(ι × α → β : 𝒮) × ι → (α → β : 𝒯) : 𝒮 -- A self-hosting specializer from 𝒮 to 𝒯.
(ι × α → β : 𝒮) → (ι → (α → β : 𝒯) : 𝒯) : 𝒯 -- A compiler compiler from 𝒮 to 𝒯.

This notation can be directly converted into type definitions in Haskell. Armed with this, we can now write a proof of the Futamura projections in Haskell with respect to languages:

{-# LANGUAGE RankNTypes #-}

module Futamura where

newtype Program a b language = Program { runProgram :: a -> b }

type Interpreter source object = forall a b.       Program (Program a b source, a) b object
type Compiler    source target = forall a b.       Program (Program a b source) (Program a b target) target
type Specializer source target = forall input a b. Program (Program (input, a) b source, input) (Program a b target) source
type Partializer source target = forall input a b. Program (Program (input, a) b source) (Program input (Program a b target) target) target

projection1 :: Specializer object target -> Interpreter source object -> Program a b source -> Program a b target
projection1 specializer interpreter program = runProgram specializer (interpreter, program)

projection2 :: Specializer object target -> Interpreter source object -> Compiler source target
projection2 specializer interpreter = runProgram specializer (specializer, interpreter)

projection3 :: Specializer source target -> Partializer source target
projection3 specializer = runProgram specializer (specializer, specializer)

We use the RankNTypes language extension to hide the type-level machinery, allowing us to concentrate on the languages involved. It's also necessary for applying a specializer to itself.

In the above program, the second projection is of particular interest. It tells us that given a self-hosting Haskell to LLVM specializer, we can apply it to any interpreter written in Haskell for some source language 𝒮 to get an 𝒮 to LLVM compiler. This means that we can write an interpreter in a high-level language and use it to generate a compiler that targets a low-level language. If the specializer is any good, then the generated compiler should be reasonably good as well.
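To make the second projection concrete, here is a minimal sketch that is not part of the original answer: the language names Haskell, LLVM and Source are phantom types invented purely for illustration, and since a Program in this model is just a wrapped Haskell function, the "specializer" degenerates to partial application rather than a real partial evaluator. The point is only to show that the languages line up as claimed and that the self-application typechecks:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Sketch only: Haskell, LLVM and Source are phantom types made up for this
-- example. A Program here is just a wrapped Haskell function, so the
-- "specializer" below is merely partial application, not a real partial
-- evaluator.

newtype Program a b language = Program { runProgram :: a -> b }

type Interpreter source object = forall a b.       Program (Program a b source, a) b object
type Compiler    source target = forall a b.       Program (Program a b source) (Program a b target) target
type Specializer source target = forall input a b. Program (Program (input, a) b source, input) (Program a b target) source

data Haskell -- source and object language of the specializer (they must coincide)
data LLVM    -- target language of the specializer
data Source  -- some language we happen to have an interpreter for

-- A self-hosting Haskell to LLVM specializer: it fixes the first argument
-- of a two-argument program.
specializer :: Specializer Haskell LLVM
specializer = Program (\(Program f, input) -> Program (\a -> f (input, a)))

-- An interpreter for Source written in Haskell; in this toy model it just
-- unwraps the program and applies it.
interpreter :: Interpreter Source Haskell
interpreter = Program (\(program, a) -> runProgram program a)

-- The second projection: applying the specializer to itself and the
-- interpreter yields a Source to LLVM compiler.
projection2 :: Specializer object target -> Interpreter source object -> Compiler source target
projection2 s int = runProgram s (s, int)

compiler :: Compiler Source LLVM
compiler = projection2 specializer interpreter
```

Running `runProgram compiler` on any `Program a b Source` then produces a `Program a b LLVM`. Of course the toy model says nothing about real code generation; it only demonstrates that the source and object languages of the specializer must coincide for the self-application to typecheck, while the target language is free.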

Another noteworthy detail is that a self-hosting specializer is very similar to a self-hosting compiler. If your compiler already performs partial evaluation, then it shouldn't be too much work to turn it into a specializer. This means that implementing the Futamura projections is a lot easier and a lot more rewarding than originally believed. I haven't yet tested this, so take it with a grain of salt.

Aadit M Shah answered Oct 22 '22