When I read the Church-Rosser II Theorem here:
Theorem (Church Rosser II)
If there is one terminating reduction, then outermost reduction will terminate, too.
I'm wondering: is there a theorem that strengthens the Church-Rosser II Theorem so that it says something about asymptotic time complexity rather than just termination?
Or, can it be proved that the call-by-need strategy has the minimal asymptotic time complexity among all reduction strategies?
Certainly not. Consider
f x y = sum [1..x] + y
g x = sum (map (f x) [1..x])
Call-by-need reduction of g x
will perform O(x^2) additions, but really only O(x) are necessary. For example, lazy HNF will get us this complexity.
-- The definition of f3 will get lazily refined.
let f3 y = f 3 y = sum [1,2,3] + y
g 3 = sum (map f3 [1,2,3])
= sum [f3 1, f3 2, f3 3]
-- Now let's refine f3 to HNF *before* applying it
f3 y = sum [1,2,3] + y
= 6 + y
-- And continue g 3
g 3 = sum [f3 1, f3 2, f3 3]
-- no need to compute sum [1..x] three times
= sum [6 + 1, 6 + 2, 6 + 3]
= ...
I handwaved the evaluation order quite a bit here, but hopefully you get the idea. We reduce a function body to HNF before applying it, thus sharing any computations that don't depend on the argument.
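To make this concrete, here is a small sketch in ordinary Haskell (the names fShared and gShared are mine, introduced just for illustration) of how you can recover the O(x) behaviour by hand: hoist the part of f's body that does not depend on y out of the lambda, so that call-by-need shares it across all applications.

-- Sketch: manually share the argument-independent work of f.
fShared :: Integer -> Integer -> Integer
fShared x = let s = sum [1..x]   -- computed once per x and shared
            in \y -> s + y

gShared :: Integer -> Integer
gShared x = sum (map (fShared x) [1..x])

main :: IO ()
main = print (gShared 3)  -- 24; sum [1..3] is evaluated only once

This is essentially what reducing to HNF before application buys you automatically; GHC's full-laziness transformation can sometimes perform this hoisting for you, but relying on it is fragile.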
For a whole lot more about this, see Michael Jonathan Thyer's Lazy Specialization.
I think your question is a bit confused. Please let me clarify a few points.
First of all, the theorem you mention is traditionally known as the standardization theorem. It is due to Curry and Feys (Combinatory Logic I, 1958), was generalized to eta reduction by Hindley (Standard and normal reductions), and was then revisited by many other authors (see e.g. this question).
Secondly, outermost reduction is different from call-by-need (call-by-need is not even a reduction strategy in the traditional sense of the word).
Coming to the complexity issue, which is the core of the question: call-by-need is optimal only with respect to weak reduction. However, weak reduction is not always the best way to reduce lambda terms. A well-known example is the following term:
n 2 I I
where n and 2 are Church numerals and I is the identity. I add two I's at the end; otherwise, weak-reduction languages would stop the computation too early.
Observe that the term reduces to
2 (2 (... (2 I) ...)) I
and (2 I) reduces to I. So, with innermost reduction you would be able to reduce the term in time linear in n.
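For concreteness, here is the key step spelled out (just a sketch, using the standard encodings 2 = λf.λx. f (f x) and I = λx. x):

2 I  =  (λf.λx. f (f x)) I
     →  λx. I (I x)
     →  λx. I x
     →  λx. x
     =  I

Each of the n nested occurrences of 2 therefore collapses in a constant number of steps, giving a reduction sequence of length O(n) overall.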
On the other hand, I invite you to perform the previous computation in Haskell, and you will discover that the reduction time grows exponentially in n. I leave it to you to work out the reasons for this phenomenon.
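If you want to try the experiment, here is one possible GHC encoding (a sketch under my own assumptions: the names Church, zero, suc, church and term, and the use of RankNTypes, are mine and not part of the original answer). Evaluating term n corresponds to reducing n 2 I I under GHC's call-by-need, weak reduction; you should see the running time roughly double each time you increase n by one.

{-# LANGUAGE RankNTypes #-}

-- Church numerals encoded as polymorphic iterators.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ x = x

suc :: Church -> Church
suc n f x = f (n f x)

two :: Church
two f x = f (f x)

-- Build the numeral n by iterated successor.
church :: Int -> Church
church 0 = zero
church k = suc (church (k - 1))

-- The term "n 2 I I" from the answer, applied to () so that
-- there is something concrete to force.
term :: Int -> ()
term n = church n two id id ()

main :: IO ()
main = print (term 24)  -- try 24, 25, 26, ... and watch the time grow

Under innermost (strong) reduction the same term needs only O(n) steps, as described above, which is exactly the gap this answer is pointing at.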
A similar discussion can be found in this thread.