 

Lazy evaluation in C++14/17 - just lambdas or also futures etc.?

I just read:

Lazy Evaluation in C++

and noticed it's kind of old and most of the answers regard pre-2011 C++. These days we have syntactic lambdas, which can even deduce the return type, so lazy evaluation seems to boil down to just passing them around: Instead of

auto x = foo();

you execute

auto unevaluated_x = []() { return foo(); };

and then evaluate when/where you need to:

auto x = unevaluated_x();

Seems like there's nothing more to it. However, one of the answers there suggests using futures with asynchronous launching. Can someone lay out why/if futures are significant for lazy-evaluation work, in C++ or more abstractly? It seems as though futures may very well be evaluated eagerly, but simply, say, on another thread, and perhaps with less priority than whatever created them; and anyway, it should be implementation-dependent, right?

Also, are there other modern C++ constructs which are useful to keep in mind in the context of lazy evaluation?

Asked Jan 31 '17 by einpoklum




1 Answer

When you write

auto unevaluated_x = []() { return foo(); };
...
auto x = unevaluated_x();

foo is re-evaluated each time you call unevaluated_x, wasting computational resources. To avoid this repeated work, it's a good idea to track whether the lambda has already been called (possibly on another thread, or in a very different place in the codebase) and cache its result. To do so, we need a wrapper around the lambda:

template<typename Callable, typename Return>
class memoized_nullary {
public:
    memoized_nullary(Callable f) : function(f) {}
    Return operator() () {
        if (!calculated) {
            result = function();  // set the flag only after a successful call,
            calculated = true;    // so a throwing foo() can be retried
        }
        return result;
    }
private:
    bool calculated = false;
    Return result;
    Callable function;
};

Please note that this code is just an example and is not thread safe.
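For illustration, a minimal thread-safe variant (a sketch, not part of the original answer) could lean on std::call_once, which guarantees the callable runs exactly once even under concurrent invocation:

```cpp
#include <mutex>
#include <utility>

// Hypothetical thread-safe memoizing wrapper: std::call_once runs the
// stored callable at most once, even if operator() races across threads.
template <typename Callable, typename Return>
class memoized_nullary_ts {
public:
    explicit memoized_nullary_ts(Callable f) : function(std::move(f)) {}
    Return operator()() {
        // If the callable throws, the once_flag stays unset and a later
        // call will retry -- that matches std::call_once's semantics.
        std::call_once(flag, [this] { result = function(); });
        return result;
    }
private:
    std::once_flag flag;
    Return result{};  // note: requires Return to be default-constructible
    Callable function;
};
```

Note this sketch assumes Return is default-constructible; storing the result in a std::optional<Return> would lift that restriction.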

But instead of reinventing the wheel, you could just use std::shared_future:

auto x = std::async(std::launch::deferred, []() { return foo(); }).share();

This requires less code to write and supports additional features out of the box: checking whether the value has already been calculated, thread safety, and so on.
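To make the deferred behavior concrete, here is a small sketch (foo and the call counter are illustrative, not from the answer) showing that the wrapped computation runs only when first waited on, and only once:

```cpp
#include <future>

// Illustrative stand-in for an expensive computation.
int call_count = 0;
int foo() { ++call_count; return 42; }

// With launch::deferred, foo() is not invoked here; it runs in whichever
// thread first calls a waiting function (e.g. get()) on the future.
std::shared_future<int> make_lazy() {
    return std::async(std::launch::deferred, foo).share();
}
```

After make_lazy() returns, call_count is still 0; the first get() on the shared_future invokes foo() in the calling thread, and subsequent get() calls return the stored result without re-evaluating.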

The standard contains the following wording [futures.async, (3.2)]:

If launch::deferred is set in policy, stores DECAY_COPY(std::forward<F>(f)) and DECAY_COPY(std::forward<Args>(args))... in the shared state. These copies of f and args constitute a deferred function. Invocation of the deferred function evaluates INVOKE(std::move(g), std::move(xyz)) where g is the stored value of DECAY_COPY(std::forward<F>(f)) and xyz is the stored copy of DECAY_COPY(std::forward<Args>(args)).... Any return value is stored as the result in the shared state. Any exception propagated from the execution of the deferred function is stored as the exceptional result in the shared state. The shared state is not made ready until the function has completed. The first call to a non-timed waiting function (30.6.4) on an asynchronous return object referring to this shared state shall invoke the deferred function in the thread that called the waiting function. Once evaluation of INVOKE(std::move(g),std::move(xyz)) begins, the function is no longer considered deferred. [ Note: If this policy is specified together with other policies, such as when using a policy value of launch::async | launch::deferred, implementations should defer invocation or the selection of the policy when no more concurrency can be effectively exploited. —end note ]

So you have a guarantee that the calculation will not be performed before its result is needed.
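You can even observe this without forcing the evaluation: a timed wait on a deferred future reports std::future_status::deferred and does not invoke the deferred function. A sketch (the invoked flag is illustrative):

```cpp
#include <future>

// Tracks whether the deferred lambda has actually run.
bool invoked = false;

std::future<int> deferred_value() {
    return std::async(std::launch::deferred, [] {
        invoked = true;
        return 1;
    });
}
```

Calling wait_for(0s) on the returned future yields std::future_status::deferred while invoked is still false; only get() (or wait()) triggers the computation.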

Answered Sep 19 '22 by alexeykuzmin0