Context: I am writing a library that exposes custom allocators in many stdlib data structures for users who want to customize memory allocation for real-time performance.
I want to use a custom allocator with std::promise and std::future. My understanding is that when an allocator is passed to std::promise, its future object also uses that custom allocator.
My test overrides global operator new and operator delete to track the number of times the default operators are called. I also implement a custom allocator that uses malloc and free but keeps separate state to count allocations/deallocations (in a real example this would be replaced with a real-time-safe allocator).
It appears that when I call std::promise::set_value for a "large" object, global operator new and operator delete are called, even though the promise was constructed with a custom allocator.
Here is a basic example. (Allocator boilerplate removed for brevity, you can see the full, compilable version on Gist: https://gist.github.com/jacquelinekay/a4a1a282108a55d545a9)
struct Foo {
  std::vector<int, InstrumentedAllocator<int>> bar;
};

int main(int argc, char ** argv) {
  (void) argc;
  (void) argv;
  InstrumentedAllocator<void> alloc;
  std::promise<Foo> promise_(std::allocator_arg, alloc);
  std::shared_future<Foo> future_ = promise_.get_future().share();
  // Start a thread that blocks for a few ms and sets the future value
  std::thread result_thread(
    [&promise_]() {
      Foo result;
      result.bar.push_back(1);
      result.bar.push_back(2);
      result.bar.push_back(3);
      // test_init starts counting calls to global new/delete
      // (stored in variables global_runtime_allocs/deallocs)
      test_init = true;
      std::this_thread::sleep_for(std::chrono::milliseconds(5));
      promise_.set_value(result);
      test_init = false;
    });
  future_.wait();
  result_thread.join();
  std::cout << "Runtime global allocations: " << global_runtime_allocs
            << " (expected: 0)" << std::endl;
  std::cout << "Runtime global deallocations: " << global_runtime_deallocs
            << " (expected: 0)" << std::endl;
}
The global operator new for this example also prints the size of the "runtime" allocation (from std::promise::set_value), resulting in this output:
$ clang++ promise_allocator.cpp -std=c++11 -lpthread
$ ./a.out
Allocation size: 16
Runtime global allocations: 1 (expected: 0)
Runtime global deallocations: 1 (expected: 0)
I get the same results on gcc 4.8 and Clang 3.4. Is this the correct interpretation of the standard? I would expect set_value to use the promise's allocator.
This appears to be a bug in the libstdc++ shipped with the GCC 4.9 release series that is fixed in the version 5 release series. Running your gist on Wandbox with version 5.1 or higher produces only the output:
Runtime global allocations: 0 (expected: 0)
Runtime global deallocations: 0 (expected: 0)
With a combination of debugger backtraces and combing through GCC's stdlib implementation, I've figured out why this happens, though I don't have a solution or workaround.
std::promise::set_value calls an internal function of its future, future::_M_set_result. [1] Passing the function object __res into this function calls the constructor of _Function_base, perhaps because the signature of the function does not pass __res by reference. [2] The constructor of _Function_base calls _M_init_functor, which either does a placement new if the functor fits in local storage, or allocates a new object on the heap otherwise. [3] For some reason that I still haven't determined, the function used internally by future does NOT use local storage and therefore allocates in the constructor.
From skimming the working draft of the standard that I could find [4], the standard is not specific about the expected allocation behavior of promise. However, it is inconvenient that I can't control the allocation behavior of the function used internally by promise, and I will probably file a bug against gcc about it.