
How do non-strict and lazy differ?


Non-strict and lazy, while informally interchangeable, apply to different domains of discussion.

Non-strict refers to semantics: the mathematical meaning of an expression. The world to which non-strict applies has no concept of the running time of a function, memory consumption, or even a computer. It simply talks about what kinds of values in the domain map to which kinds of values in the codomain. In particular, a strict function must map the value ⊥ ("bottom" -- see the semantics link above for more about this) to ⊥; a non-strict function is allowed not to do this.
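A minimal Haskell sketch of that semantic distinction, using undefined as the usual stand-in for ⊥ (the function names are just for illustration):

    -- A strict function: (+) inspects its argument, so ⊥ in gives ⊥ out.
    strictSucc :: Int -> Int
    strictSucc n = n + 1      -- strictSucc undefined is itself ⊥

    -- A non-strict function: the argument is never inspected, so the
    -- function is allowed to map ⊥ to an ordinary value.
    alwaysZero :: Int -> Int
    alwaysZero _ = 0          -- alwaysZero undefined evaluates to 0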

Lazy refers to operational behavior: the way code is executed on a real computer. Most programmers think of programs operationally, so this is probably what you are thinking. Lazy evaluation refers to implementation using thunks -- pointers to code which are replaced with a value the first time they are executed. Notice the non-semantic words here: "pointer", "first time", "executed".
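A small sketch that makes this operational behavior visible with Debug.Trace (the trace call is only a debugging aid; on a typical GHC run the message appears once, after the first putStrLn):

    import Debug.Trace (trace)

    main :: IO ()
    main = do
      let x = trace "evaluating x" (2 + 2 :: Int)  -- builds a thunk; nothing runs yet
      putStrLn "thunk created, not yet executed"
      print (x + x)  -- "evaluating x" is printed once: the thunk is run,
                     -- then overwritten with 4 and shared by both uses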

Lazy evaluation gives rise to non-strict semantics, which is why the concepts seem so close together. But as FUZxxl points out, laziness is not the only way to implement non-strict semantics.

If you are interested in learning more about this distinction, I highly recommend the link above. Reading it was a turning point in my conception of the meaning of computer programs.


An example of an evaluation model that is neither strict nor lazy is optimistic evaluation, which gives some speedup because it can avoid a lot of "easy" thunks:

Optimistic evaluation means that even if a subexpression may not be needed to evaluate the superexpression, we still evaluate some of it using some heuristics. If the subexpression doesn't terminate quickly enough, we suspend its evaluation until it's really needed. This gives us an advantage over lazy evaluation if the subexpression is needed later, as we don't need to generate a thunk. On the other hand, we don't lose too much if the expression doesn't terminate, as we can abort it quickly enough.

As you can see, this evaluation model is not strict: if something that yields _|_ is evaluated but not needed, the function will still terminate, because the engine aborts the evaluation. On the other hand, more expressions than needed may be evaluated, so it's not completely lazy either.


Yes, there is some unclear use of terminology here, but the terms coincide in most cases regardless, so it's not too much of a problem.

One major difference is when terms are evaluated. There are multiple strategies for this, ranging on a spectrum from "as soon as possible" to "only at the last moment". The term eager evaluation is sometimes used for strategies leaning toward the former, while lazy evaluation properly refers to a family of strategies leaning heavily toward the latter. The distinction between "lazy evaluation" and related strategies tends to involve when and where the result of evaluating something is retained, vs. being tossed aside. The familiar memoization technique in Haskell of assigning a name to a data structure and indexing into it is based on this. In contrast, a language that simply spliced expressions into each other (as in "call-by-name" evaluation) might not support this.
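That technique is what makes the standard memoized-Fibonacci sketch work; the definitions below are just the usual illustration, not something from the answer above:

    -- 'fibs' is a named data structure, so each element is computed at
    -- most once and then shared by every later index into it (call-by-need).
    fibs :: [Integer]
    fibs = map fib [0 ..]

    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fibs !! (n - 1) + fibs !! (n - 2)

Under a call-by-name scheme that re-spliced the expression at each use, the same definition would lose its sharing and fall back to exponential time.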

The other difference is which terms are evaluated, ranging from "absolutely everything" to "as little as possible". Since any value actually used to compute the final result can't be ignored, the difference here is how many superfluous terms are evaluated. As well as reducing the amount of work the program has to do, ignoring unused terms means that any errors they would have generated won't occur. When a distinction is being drawn, strictness refers to the property of evaluating everything under consideration (for a strict function, this means the terms it's applied to, not necessarily the sub-expressions inside those arguments), while non-strict means evaluating only some things (either by delaying evaluation, or by discarding terms entirely).
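Both points fit in a few lines of Haskell, again with undefined standing in for a term whose evaluation would raise an error (the names are only illustrative):

    -- The ignored argument is never evaluated, so its error never fires.
    ignoreSecond :: Int -> Int -> Int
    ignoreSecond x _ = x                        -- ignoreSecond 3 undefined  ==>  3

    -- 'length' evaluates the term it is applied to (the list's spine) but
    -- not the sub-expressions inside it (the elements).
    spineLength :: Int
    spineLength = length [undefined, undefined] -- ==> 2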

It should be easy to see how these interact in complicated ways; decisions are not at all orthogonal, as the extremes tend to be incompatible. For instance:

  • Very non-strict evaluation precludes some amount of eagerness; if you don't know whether a term will be needed, you can't evaluate it yet.

  • Very strict evaluation makes non-eagerness somewhat irrelevant; if you're evaluating everything, the decision of when to do so is less significant.

Alternate definitions do exist, though. For instance, at least in Haskell, a "strict function" is often defined as one that forces its arguments sufficiently that the function will evaluate to _|_ ("bottom") whenever any argument does; note that by this definition, id is strict (in a trivial sense), because forcing the result of id x will have exactly the same behavior as forcing x alone.
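That definition can be probed directly in GHCi; the session below is a sketch with the output abbreviated:

    ghci> id undefined       -- forcing (id x) behaves just like forcing x, so id counts as strict
    *** Exception: Prelude.undefined
    ghci> const 1 undefined  -- const never forces its second argument: non-strict in it
    1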


This started out as an update but it started to get long.

Laziness / Call-by-need is a memoized version of call-by-name where, if the function argument is evaluated, that value is stored for subsequent uses. In a "pure" (effect-free) setting, this produces the same results as call-by-name; when the function argument is used two or more times, call-by-need is almost always faster.
Imperative Example - Apparently this is possible. There is an interesting article on Lazy Imperative Languages. It says there are two methods: one requires closures, the second uses graph reduction. Since C does not support closures, you would need to explicitly pass an argument to your iterator. You could wrap a map structure: if the value does not exist, calculate it; otherwise return the stored value.
Note: Haskell implements this by "pointers to code which are replaced with a value the first time they are executed" - luqui.
This is non-strict call-by-name but with sharing/memoization of the results.
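A rough Haskell rendering of that map-structure idea, i.e. call-by-need done by hand the way an imperative language might (the memoize helper and its exact shape are a hypothetical illustration, not taken from the article):

    import qualified Data.Map as Map
    import Data.IORef (newIORef, readIORef, modifyIORef')
    import Control.Exception (evaluate)

    -- Look the argument up in a table: if absent, compute the value now,
    -- store it, and return it; if present, reuse the stored value.
    memoize :: Ord k => (k -> v) -> IO (k -> IO v)
    memoize f = do
      ref <- newIORef Map.empty
      pure $ \k -> do
        table <- readIORef ref
        case Map.lookup k table of
          Just v  -> pure v                     -- later uses: no recomputation
          Nothing -> do
            v <- evaluate (f k)                 -- first use: run the computation
            modifyIORef' ref (Map.insert k v)
            pure v

Haskell's built-in thunks do essentially the same bookkeeping per expression rather than per key, which is the "pointers to code" description quoted above.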

Call-By-Name - In call-by-name evaluation, the arguments to a function are not evaluated before the function is called — rather, they are substituted directly into the function body (using capture-avoiding substitution) and then left to be evaluated whenever they appear in the function. If an argument is not used in the function body, the argument is never evaluated; if it is used several times, it is re-evaluated each time it appears.
Imperative Example: callbacks
Note: This is non-strict as it avoids evaluation if not used.
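Call-by-name can be simulated in Haskell by passing a computation (a function from ()) instead of a value, so that each use re-runs it, in contrast to the single shared thunk of ordinary call-by-need. A sketch (run it interpreted or without -O, since GHC's full-laziness optimization can otherwise share the body anyway):

    import Debug.Trace (trace)

    -- Call-by-name simulation: the argument is re-evaluated at every use.
    twiceByName :: (() -> Int) -> Int
    twiceByName arg = arg () + arg ()

    -- Ordinary (call-by-need) argument: one thunk, evaluated at most once.
    twiceByNeed :: Int -> Int
    twiceByNeed x = x + x

    main :: IO ()
    main = do
      print (twiceByName (\() -> trace "by name" 21))  -- "by name" traced twice
      print (twiceByNeed (trace "by need" 21))         -- "by need" traced once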

Non-Strict - In non-strict evaluation, arguments to a function are not evaluated unless they are actually used in the evaluation of the function body.
Imperative Example: short-circuiting
Note: applying a function to _|_ ("bottom") appears to be a way to test whether the function is non-strict
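The short-circuiting case and the _|_ test each fit on one GHCi line (a sketch):

    ghci> True || undefined    -- (||) never looks at its second argument here: non-strict
    True
    ghci> head (1 : undefined) -- head is non-strict in the tail of its argument
    1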

So a function can be non-strict but not lazy. A function that is lazy is always non-strict. Call-By-Need is partly defined by Call-By-Name, which is partly defined by Non-Strict.

An Excerpt from "Lazy Imperative Languages"

2.1. NON-STRICT SEMANTICS VS. LAZY EVALUATION
We must first clarify the distinction between "non-strict semantics" and "lazy evaluation". Non-strict semantics are those which specify that an expression is not evaluated until it is needed by a primitive operation. There may be various types of non-strict semantics. For instance, non-strict procedure calls do not evaluate the arguments until their values are required. Data constructors may have non-strict semantics, in which compound data are assembled out of unevaluated pieces. Lazy evaluation, also called delayed evaluation, is the technique normally used to implement non-strict semantics. In section 4, the two methods commonly used to implement lazy evaluation are very briefly summarized.

CALL BY VALUE, CALL BY LAZY, AND CALL BY NAME
"Call by value" is the general name used for procedure calls with strict semantics. In call by value languages, each argument to a procedure call is evaluated before the procedure call is made; the value is then passed to the procedure or enclosing expression. Another name for call by value is "eager" evaluation. Call by value is also known as "applicative order" evaluation, because all arguments are evaluated before the function is applied to them.
"Call by lazy" (using William Clinger's terminology in [8]) is the name given to procedure calls which use non-strict semantics. In languages with call by lazy procedure calls, the arguments are not evaluated before being substituted into the procedure body. Call by lazy evaluation is also known as "normal order" evaluation, because of the order (outermost to innermost, left to right) of evaluation of an expression.
"Call by name" is a particular implementation of call by lazy, used in Algol-60 [18]. The designers of Algol-60 intended that call-by-name parameters be physically substituted into the procedure body, enclosed by parentheses and with suitable name changes to avoid conflicts, before the body was evaluated.

CALL BY LAZY VS. CALL BY NEED
Call by need is an extension of call by lazy, prompted by the observation that a lazy evaluation could be optimized by remembering the value of a given delayed expression, once forced, so that the value need not be recalculated if it is needed again. Call by need evaluation, therefore, extends call by lazy methods by using memoization to avoid the need for repeated evaluation. Friedman and Wise were among the earliest advocates of call by need evaluation, proposing "suicidal suspensions" which self-destructed when they were first evaluated, replacing themselves with their values.