What is the practical use for laziness as a built-in language feature?

It's fairly obvious why a functional programming language that wants to be lazy needs to be pure. I'm looking at the reverse question: if a language wants to be pure, is there a big advantage in being lazy? One argument, made by one of the designers of Haskell, is that it removes temptation; maybe, but I'm trying to weigh up the more concrete advantages.

Given that you want to do functional programming, what are the use cases where built-in laziness lets you express things more clearly, simply or concisely?

Stated simply: Why is laziness so important that you'd want to build it into the language?

(I'm looking for use cases oriented towards applications rather than demos - I know you can do things like producing an infinite list of prime numbers by filtering an infinite list of natural numbers, but who writes that ten times 'fore lunch...)

asked Nov 26 '11 by rwallace


2 Answers

"Nothing is evaluated until it is needed at another place" is a simplified metaphor which doesn't cover all aspects of lazy evaluation (e.g. it doesn't mention the strictness phenomena).

From a theoretical standpoint, there are three ways to go when designing a pure language (assuming it's based on some kind of lambda calculus and not on a more exotic evaluation model): strict, non-strict and total.

Each of them has its advantages and disadvantages, so you need to read the corresponding research papers.

Total languages are the purest of the three. In the other two, non-termination can be seen as a side effect, so strictness and totality analyzers must be built to keep an implementation efficient. Both analyses are undecidable, so the analyzers can never be complete.

However, total languages are the least expressive: a total language cannot be Turing-complete. A frequent approach to getting good enough expressiveness is to build in a proof system for well-founded recursion, which is not much easier to construct than the analyzers for non-total languages.

From a practical standpoint, non-strict semantics lets you define control abstractions more easily, because control structures are essentially non-strict. Even a strict language needs some places with non-strict semantics: the if construct, for example, is non-strict in strict languages too.

So if your language is strict, control structures are a special case. A non-strict language, in contrast, can be uniformly non-strict - it has no inherent need for strict constructs.
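
For instance, you can define your own conditional as an ordinary Haskell function. A minimal sketch (the name myIf is just for illustration):

    -- A user-defined conditional. Arguments are passed unevaluated,
    -- so only the branch that is actually selected gets computed;
    -- a strict language would evaluate both branches up front.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    main :: IO ()
    main = print (myIf (1 < 2) "then" (error "never evaluated"))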

As for "who writes that ten times 'fore lunch" - anyone who uses Haskell for their projects does. I think developing a non-toy project using a language (a non-strict language in your case) is a best way to grasp its advantages and disadvantages.

Below are a few generic use cases for laziness, illustrated by non-toy examples:

  1. Cases when control flow is hard to predict. Think of attribute grammars: without laziness you have to perform a topological sort on the attributes to resolve the dependencies, and re-sorting your code every time the dependency graph changes is not practical. In Haskell you can implement the attribute grammar formalism without explicit sorting, and there are at least two actual implementations on Hackage. Attribute grammars have wide application in compiler construction. (The repmin sketch after this list shows the underlying circular-programming trick.)

  2. The "generate and search" approach to many optimization problems. In a strict language you have to interleave generation and search; in Haskell you just compose separate generation and search functions, and your code remains syntactically modular but interleaved at runtime. Think of the traveling salesman problem (TSP), where you generate all possible tours and then search through them using a branch-and-bound algorithm. Note that a branch-and-bound algorithm inspects only the first few cities of a tour, so only the necessary parts of the routes are generated. The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. (A minimal generate-and-search sketch follows this list.)

  3. Lazy code has non-modular control flow, so a single function can have many possible control flows depending on the environment it executes in. This phenomenon can be seen as a kind of 'control flow polymorphism', so lazy control flow abstractions are more generic than their strict counterparts, and a standard library of higher-order functions is much more useful in a lazy language. Think of Python generators, loops and list iterators: in Haskell, list functions cover all three use cases, with the control flow adapting to different usage scenarios because of laziness. It is not limited to lists - think of Data.Arrow and iteratees, lazy and strict versions of the State monad, etc. Also note that non-modular control flow is both an advantage and a disadvantage, as it makes reasoning about performance harder. (The fibs sketch after this list illustrates the point.)

  4. Lazy, possibly infinite data structures are useful beyond toy examples. See Conal Elliott's work on memoizing higher-order functions using tries. Infinite data structures appear as infinite search spaces (see 2), infinite loops, and never-exhausting generators in the Python sense (see 3). (The fibs sketch below also doubles as a memo table.)
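
To make case 1 concrete, here is Bird's classic repmin - a minimal sketch of the circular-programming trick that lazy attribute grammar implementations build on. The output tree refers to a value that is only known once the whole traversal has finished, and laziness resolves the dependency order:

    -- Replace every leaf with the global minimum in a single traversal.
    -- The result tree is built against m, which is determined only after
    -- the whole tree has been visited; without laziness this takes two
    -- passes (or an explicit dependency sort).
    data Tree = Leaf Int | Fork Tree Tree deriving Show

    repmin :: Tree -> Tree
    repmin t = t'
      where
        (m, t') = go t
        go (Leaf n)   = (n, Leaf m)
        go (Fork l r) = let (ml, l') = go l
                            (mr, r') = go r
                        in (min ml mr, Fork l' r')

    main :: IO ()
    main = print (repmin (Fork (Leaf 3) (Fork (Leaf 1) (Leaf 2))))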

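For case 2, a minimal sketch (the cost function is a made-up stand-in, not a real branch-and-bound): generation and search are written as separate, composable functions, yet at runtime only as many tours as the search inspects are ever materialized:

    import Data.List (find, permutations)

    -- Toy tour cost: sum of the distances between consecutive "cities".
    cost :: [Int] -> Int
    cost xs = sum (map abs (zipWith (-) xs (tail xs)))

    -- permutations generates tours lazily; find stops at the first
    -- acceptable one, so most candidate tours are never built.
    main :: IO ()
    main = print (find (\tour -> cost tour <= 6) (permutations [1..6]))
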
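And for cases 3 and 4 together, a minimal sketch: one infinite, self-referential list serves as a generator, a loop bound and a memo table, with the control flow adapting to each consumer:

    -- An infinite list defined in terms of itself. Each cell is
    -- computed at most once, so the list doubles as a memo table.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = do
      print (take 10 fibs)                 -- generator: finite prefix
      mapM_ print (takeWhile (< 100) fibs) -- loop: run until a condition fails
      print (fibs !! 50)                   -- lookup forces only 51 cells
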
answered Oct 08 '22 by nponeccop


Mac OS X's Core Image is a good practical example of lazy evaluation.

Basically, Core Image lets you create a directed acyclic graph of image generators and filters. No evaluation actually takes place until the last step in the process: materialization. When you request materialization of a Core Image graph, the final image frame is propagated backwards through the graph's transformations, minimizing the number of pixel values that actually need to be evaluated.
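
A minimal Haskell analogue of the same principle (this is not Apple's API, just an illustration): an image is a function from coordinates to pixels, filters compose as ordinary functions, and pixels are computed only when a region is materialized:

    -- An "image" is just a pixel lookup; nothing is computed up front.
    type Image = (Int, Int) -> Double

    checker :: Image
    checker (x, y) = fromIntegral ((x + y) `mod` 2)

    invert :: Image -> Image
    invert img p = 1.0 - img p

    -- Materializing a 4x4 window forces exactly those 16 pixels;
    -- everything outside the window is never evaluated.
    render :: Image -> [[Double]]
    render img = [ [ img (x, y) | x <- [0 .. 3] ] | y <- [0 .. 3] ]

    main :: IO ()
    main = mapM_ print (render (invert checker))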

answered Oct 07 '22 by smokris