I just read:
Lazy Evaluation in C++
and noticed it's kind of old and most of the answers regard pre-2011 C++. These days we have syntactic lambdas, which can even deduce the return type, so lazy evaluation seems to boil down to just passing them around: Instead of
auto x = foo();
you execute
auto unevaluated_x = []() { return foo(); };
and then evaluate when/where you need to:
auto x = unevaluated_x();
Seems like there's nothing more to it. However, one of the answers there suggests using futures with asynchronous launching. Can someone lay out why/if futures are significant for lazy-evaluation work, in C++ or more abstractly? It seems as though futures may very well be evaluated eagerly, but simply, say, on another thread, and perhaps with less priority than whatever created them; and anyway, it should be implementation-dependent, right?
Also, are there other modern C++ constructs which are useful to keep in mind in the context of lazy evaluation?
In programming language theory, lazy evaluation, or call-by-need, is an evaluation strategy which delays the evaluation of an expression until its value is needed (non-strict evaluation) and which also avoids repeated evaluations (sharing).
When you write
auto unevaluated_x = []() { return foo(); };
...
auto x = unevaluated_x();
the value is recomputed every time you call unevaluated_x, wasting computational resources. To avoid this redundant work, it's a good idea to keep track of whether the lambda has already been called (possibly on another thread, or from a very different place in the codebase). To do so, we need a wrapper around the lambda:
template <typename Callable, typename Return>
class memoized_nullary {
public:
    explicit memoized_nullary(Callable f) : function(std::move(f)) {}

    Return operator()() {
        if (!calculated) {
            result = function();  // evaluate at most once
            calculated = true;    // set the flag only after a successful call,
                                  // so an exception doesn't cache a bogus result
        }
        return result;
    }

private:
    bool calculated = false;
    Return result;
    Callable function;
};
Please note that this code is just an example and is not thread safe.
But instead of reinventing the wheel, you could just use std::shared_future:
auto x = std::async(std::launch::deferred, []() { return foo(); }).share();
This requires less code and supports some other features out of the box (such as checking whether the value has already been calculated, thread safety, and exception propagation).
There's the following text in the standard [futures.async, (3.2)]:
If launch::deferred is set in policy, stores DECAY_COPY(std::forward<F>(f)) and DECAY_COPY(std::forward<Args>(args))... in the shared state. These copies of f and args constitute a deferred function. Invocation of the deferred function evaluates INVOKE(std::move(g), std::move(xyz)) where g is the stored value of DECAY_COPY(std::forward<F>(f)) and xyz is the stored copy of DECAY_COPY(std::forward<Args>(args)).... Any return value is stored as the result in the shared state. Any exception propagated from the execution of the deferred function is stored as the exceptional result in the shared state. The shared state is not made ready until the function has completed. The first call to a non-timed waiting function (30.6.4) on an asynchronous return object referring to this shared state shall invoke the deferred function in the thread that called the waiting function. Once evaluation of INVOKE(std::move(g), std::move(xyz)) begins, the function is no longer considered deferred. [ Note: If this policy is specified together with other policies, such as when using a policy value of launch::async | launch::deferred, implementations should defer invocation or the selection of the policy when no more concurrency can be effectively exploited. —end note ]
So you have a guarantee that the computation will not run before its result is needed.