Lazy Evaluation vs Macros

I'm used to lazy evaluation from Haskell, and find myself getting irritated with eager-by-default languages now that I've used lazy evaluation properly. This is actually quite damaging, as the other languages I use mainly make lazily evaluating stuff very awkward, normally involving the rolling out of custom iterators and so forth. So just by acquiring some knowledge, I've actually made myself less productive in my original languages. Sigh.

But I hear that AST macros offer another clean way of doing the same thing. I've often heard statements like 'Lazy evaluation makes macros redundant' and vice-versa, mostly from sparring Lisp and Haskell communities.

I've dabbled with macros in various Lisp variants. They just seemed like a really organized way of copying and pasting chunks of code around to be handled at compile time. They certainly weren't the holy grail that Lispers like to think they are. But that's almost certainly because I can't use them properly. Of course, having the macro system work on the same core data structure that the language itself is assembled from is really useful, but it's still basically an organized way of copying and pasting code around. I acknowledge that basing a macro system on the same AST as the language, one that allows full runtime alteration, is powerful.

What I want to know is: how can macros be used to do, concisely and succinctly, what lazy evaluation does? If I want to process a file line by line without slurping up the whole thing, I just return a list that's had a line-reading routine mapped over it. It's the perfect example of DWIM (do what I mean). I don't even have to think about it.

I clearly don't get macros. I've used them and not been particularly impressed given the hype. So there's something I'm missing that I'm not getting by reading over documentation online. Can someone explain all of this to me?

asked Aug 12 '11 by Louis

2 Answers

Lazy evaluation can substitute for certain uses of macros (those which delay evaluation to create control constructs) but the converse isn't really true. You can use macros to make delayed evaluation constructs more transparent -- see SRFI 41 (Streams) for an example of how: http://download.plt-scheme.org/doc/4.1.5/html/srfi-std/srfi-41/srfi-41.html
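For a flavor of that, here is a minimal sketch (not SRFI 41's actual code; my-stream-cons, my-stream-car, my-stream-cdr, and naturals-from are made-up names) of how a macro hides the delay/force plumbing so callers never write it by hand:

    #lang racket
    ;; A stream cell is a pair of promises. The macro inserts the delays, so its
    ;; arguments are *not* evaluated at the call site. If my-stream-cons were an
    ;; ordinary function, (naturals-from (+ n 1)) below would recurse forever.
    (define-syntax-rule (my-stream-cons head tail)
      (cons (delay head) (delay tail)))

    (define (my-stream-car s) (force (car s)))
    (define (my-stream-cdr s) (force (cdr s)))

    ;; An infinite stream of naturals; only the cells we actually walk are computed.
    (define (naturals-from n)
      (my-stream-cons n (naturals-from (+ n 1))))

    (my-stream-car (my-stream-cdr (my-stream-cdr (naturals-from 0))))  ; => 2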

On top of this, you could write your own lazy IO primitives as well.
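A rough sketch of what such a primitive might look like (hypothetical names, not a library API): the rest of the line list is a promise, so nothing past the current line is read until a consumer forces it.

    #lang racket
    ;; Read a file one line at a time, on demand.
    (define (lazy-lines port)
      (define line (read-line port))
      (if (eof-object? line)
          '()
          (cons line (delay (lazy-lines port)))))

    ;; Consume the lines without ever holding the whole file in memory.
    (define (lazy-count lines)
      (if (null? lines)
          0
          (+ 1 (lazy-count (force (cdr lines))))))

    ;; "example.txt" is just a placeholder filename for illustration.
    (call-with-input-file "example.txt"
      (lambda (in) (lazy-count (lazy-lines in))))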

In my experience, however, pervasively lazy code in a strict language tends to introduce an overhead as compared to pervasively lazy code in a runtime designed to efficiently support it from the start -- which, mind you, is an implementation issue really.

answered Sep 19 '22 by sclv

"Lazy evaluation makes macros redundant"

This is pure nonsense (not your fault; I've heard it before). It's true that you can use macros to change the order, context, etc. of expression evaluation, but that's the most basic use of macros, and it's really not convenient to simulate a lazy language using ad-hoc macros instead of functions. So if you came at macros from that direction, you would indeed be disappointed.

Macros are for extending the language with new syntactic forms. Some of the specific capabilities of macros are

  1. Affecting the order, context, etc. of expression evaluation.
  2. Creating new binding forms (i.e. affecting the scope an expression is evaluated in).
  3. Performing compile-time computation, including code analysis and transformation.

Macros that do (1) can be pretty simple. For example, in Racket, the exception-handling form, with-handlers, is just a macro that expands into call-with-exception-handler, some conditionals, and some continuation code. It's used like this:

    (with-handlers ([(lambda (e) (exn:fail:network? e))
                     (lambda (e)
                       (printf "network seems to be broken\n")
                       (cleanup))])
      (do-some-network-stuff))

The macro implements the notion of "predicate-and-handler clauses in the dynamic context of the exception" based on the primitive call-with-exception-handler which handles all exceptions at the point they're raised.
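Macros that do (2) can be nearly as small. As a toy sketch (my-let is a made-up name; Racket's real let is more elaborate than this), a new binding form can be introduced by expanding into an immediately applied lambda:

    #lang racket
    ;; The macro creates a new scope: each name is visible only inside the body.
    (define-syntax-rule (my-let ([name val] ...) body ...)
      ((lambda (name ...) body ...) val ...))

    (my-let ([x 1] [y 2])
      (+ x y))  ; => 3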

A more sophisticated use of macros is an implementation of an LALR(1) parser generator. Instead of a separate file that needs pre-processing, the parser form is just another kind of expression. It takes a grammar description, computes the tables at compile time, and produces a parser function. The action routines are lexically scoped, so they can refer to other definitions in the file or even lambda-bound variables. You can even use other language extensions in the action routines.
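As a much smaller illustration of (3), compile-time computation, here's a hypothetical macro (ct-fib is a made-up name) that does its arithmetic entirely at expansion time and leaves only a literal in the compiled code:

    #lang racket
    (require (for-syntax racket/base))

    ;; fib is defined at phase 1, so it runs during expansion, not at run time.
    (define-for-syntax (fib k)
      (if (< k 2) k (+ (fib (- k 1)) (fib (- k 2)))))

    (define-syntax (ct-fib stx)
      (syntax-case stx ()
        [(_ n)
         ;; Compute the value now and splice it back in as a literal.
         (datum->syntax stx (fib (syntax->datum #'n)))]))

    (ct-fib 10)  ; expands to the literal 55; nothing is computed at run time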

At the extreme end, Typed Racket is a typed dialect of Racket implemented via macros. It has a sophisticated type system designed to match the idioms of Racket/Scheme code, and it interoperates with untyped modules by protecting typed functions with dynamic software contracts (also implemented via macros). It's implemented by a "typed module" macro that expands, type-checks, and transforms the module body as well as auxiliary macros for attaching type information to definitions, etc.

FWIW, there's also Lazy Racket, a lazy dialect of Racket. It's not implemented by turning every function into a macro, but by rebinding lambda, define, and the function application syntax to macros that create and force promises.
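To get a feel for the technique (a deliberately crude sketch, not how Lazy Racket is actually put together), imagine an application macro that wraps every argument in a promise, paired with functions that force arguments only as they are needed:

    #lang racket
    ;; Hypothetical lazy application: arguments become promises instead of values.
    (define-syntax-rule (lazy-app f arg ...)
      (f (delay arg) ...))

    ;; A function written against that convention forces only what it needs.
    (define (my-if c t e)
      (if (force c) (force t) (force e)))

    ;; The (error ...) expression is never evaluated: its promise is never forced.
    (lazy-app my-if #t 'cheap (error "never evaluated"))  ; => 'cheap

Lazy Racket has to do considerably more than this (the function position, special forms, and strictness points all need care), but the underlying move is the same: macros that create promises and force them at the right points.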

In summary, lazy evaluation and macros have a small point of intersection, but they're extremely different things. And macros are certainly not subsumed by lazy evaluation.

answered Sep 18 '22 by Ryan Culpepper