Have I implemented Y-combinator using C# dynamic, and if I haven't, what is it?

My brain seems to be in masochistic mode, so after being drowned in this, this and this, it wanted to mess around with some DIY in C#.

I came up with the following, which I don't think is the Y-combinator, but it does seem to manage to make a non-recursive function recursive, without referring to itself:

Func<Func<dynamic, dynamic>, Func<dynamic, dynamic>> Y = x => x(x);

So given these:

Func<dynamic, Func<dynamic, dynamic>> fact = 
                  self => n => n == 0 ? 1 : n * self(self)(n - 1);
Func<dynamic, Func<dynamic, dynamic>> fib = 
                  self => n => n < 2 ? n : self(self)(n-1) + self(self)(n-2);

We can generate these:

Func<dynamic, dynamic> Fact = Y(fact);
Func<dynamic, dynamic> Fib = Y(fib);

Enumerable.Range(0, 10)
          .ToList()
          .ForEach(i => Console.WriteLine("Fact({0})={1}", i, Fact(i)));

Enumerable.Range(0, 10)
          .ToList()
          .ForEach(i => Console.WriteLine("Fib({0})={1}", i, Fib(i)));
asked Oct 05 '11 by Benjol

1 Answer

No, that's not the Y combinator; you're only halfway there. You still need to factor out the self-application within the "seed" functions you're applying it to. That is, instead of this:

Func<dynamic, Func<dynamic, dynamic>> fact = 
                  self => n => n == 0 ? 1 : n * self(self)(n - 1);

you should have this:

Func<dynamic, Func<dynamic, dynamic>> fact = 
                  self => n => n == 0 ? 1 : n * self(n - 1);

Note the single occurrence of self in the second definition as opposed to the two occurrences in the first definition.
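The same factoring applies to the fib in your question; a sketch (not in the original post) with a single occurrence of self would be:

Func<dynamic, Func<dynamic, dynamic>> fib = 
                  self => n => n < 2 ? n : self(n - 1) + self(n - 2);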

(edited to add:) BTW, since your use of C# simulates the lambda calculus with call-by-value evaluation, the fixed-point combinator you want is the one often called Z, not Y.
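
For concreteness, here is one sketch of Z in the same dynamic-typed style as your code (the name Z and the exact delegate types are illustrative choices, not the only way to write it):

// Z = f => (x => f(v => x(x)(v))) (x => f(v => x(x)(v)))
// The inner delegate delays the self-application until an argument arrives.
Func<Func<dynamic, Func<dynamic, dynamic>>, Func<dynamic, dynamic>> Z =
    f =>
    {
        Func<dynamic, Func<dynamic, dynamic>> w =
            x => f(new Func<dynamic, dynamic>(v => x(x)(v)));
        return w(w);
    };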

(edited again to elaborate:) The equation that describes Y is this (see the Wikipedia page for the derivation):

Y g = g (Y g)

But in most practical programming languages, you evaluate the argument of a function before you call the function. In the programming languages community, that's called call-by-value evaluation (not to be confused with the way C/C++/Fortran/etc programmers distinguish "call by value" vs "call by reference" vs "call by copy-restore", etc).

But if we evaluated the argument (Y g) eagerly, as call-by-value demands, we'd get

Y g = g (Y g) = g (g (Y g)) = g (g (g (Y g))) = ...

That is, we'd spend all of our time constructing the recursive function and never get around to applying it.
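
You can see this concretely in C#: a direct transcription of the equation (using a hypothetical name NaiveY; this is a sketch, not code from the question) keeps evaluating the argument NaiveY(g) and overflows the stack before g is ever called:

// NaiveY(g) must evaluate NaiveY(g) again before it can call g,
// so any call such as NaiveY(fact) throws StackOverflowException.
Func<Func<dynamic, Func<dynamic, dynamic>>, Func<dynamic, dynamic>> NaiveY = null;
NaiveY = g => g(NaiveY(g));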

In call-by-name evaluation, on the other hand, you apply a function, here g, to the unevaluated argument expression, here (Y g). But if g is like fact, it's waiting for another argument before it does anything. So we would wait for the second argument to g before trying to evaluate (Y g) further, and depending on what the function does (i.e., if it has a base case), we might not need to evaluate (Y g) at all. That's why Y works under call-by-name evaluation.

The fix for call-by-value is to change the equation. Instead of Y g = g (Y g), we want something like the following:

Z g = g (λx. (Z g) x)

(I think I got the equation right, or close to right. You can calculate it out and see if it fits with the definition of Z.)

One way to think of this is that instead of computing "the whole recursive function" and handing it to g, we hand g a function that will compute the recursive function a little bit at a time, and only when we actually need a bit more of it so that we can apply it to an argument (x).
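
Putting it together, a sketch of how the Z above could be applied to the factored-out definitions (fib factored the same way as fact) might look like this; the output comments show what the recursion produces:

Func<dynamic, Func<dynamic, dynamic>> fact = 
                  self => n => n == 0 ? 1 : n * self(n - 1);
Func<dynamic, Func<dynamic, dynamic>> fib = 
                  self => n => n < 2 ? n : self(n - 1) + self(n - 2);

Func<dynamic, dynamic> Fact = Z(fact);
Func<dynamic, dynamic> Fib = Z(fib);

Console.WriteLine(Fact(5));   // 120
Console.WriteLine(Fib(9));    // 34

The eta-expanded delegate v => x(x)(v) inside Z plays the role of λx. (Z g) x in the equation: it hands g something it can call, but the self-application only happens once an actual argument shows up.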

answered Sep 18 '22 by Ryan Culpepper