One way to get a std::future is through std::async:

int foo()
{
    return 42;
}
...
std::future<int> x = std::async(foo);
In this example, how is the storage for x's asynchronous state allocated, and which thread (if more than one thread is involved) is responsible for performing the allocation? Moreover, does a client of std::async have any control over the allocation?
For context, I see that one of the constructors of std::promise may receive an allocator, but it is not clear to me whether it is possible to customize the allocation of the std::future at the level of std::async.
The memory is allocated by the thread that calls std::async, and you have no control over how it is done. Typically it will be done by some variant of new __internal_state_type, but there is no guarantee; it may use malloc, or an allocator specifically chosen for the purpose.
From 30.6.8p3 [futures.async]:
"Effects: The first function behaves the same as a call to the second function with a policy argument of
launch::async | launch::deferred
and the same arguments forF
andArgs
. The second function creates a shared state that is associated with the returned future object. ..."
The "first function" is the overload without a launch policy, whilst the second is the overload with a launch policy.
In the case of std::launch::deferred, there is no other thread, so everything must happen on the calling thread. In the case of std::launch::async, 30.6.8p3 goes on to say:
— if policy & launch::async is non-zero — calls INVOKE(DECAY_COPY(std::forward<F>(f)), DECAY_COPY(std::forward<Args>(args))...) (20.8.2, 30.3.1.2) as if in a new thread of execution represented by a thread object with the calls to DECAY_COPY() being evaluated in the thread that called async. ...
The key phrase is "being evaluated in the thread that called async": since the copies of the function and arguments have to be made in the calling thread, this essentially requires that the shared state is allocated by the calling thread.
Of course, you could write an implementation that started the new thread, waited for it to allocate the state, and then returned a future that referenced that, but why would you?