I'm trying to learn how to execute "embarrassingly" parallel tasks in C++11. A common pattern I come across is getting the result of a function evaluated over a range of values, similar to calling Python's multiprocessing.Pool.map. I've written a minimal example that shows what I know how to do, namely launch a single task and wait for the result. How can I "map" this call asynchronously and wait until all values are done? Ideally, I'd like the results in a vector of the same length and order as the original.
#include <iostream>
#include <thread>
#include <future>
#include <vector>
using namespace std;
double square_add(double x, double y) { return x*x+y; }
int main() {
    vector<double> A = {1, 2, 3, 4, 5};

    // Single evaluation
    auto single_result = std::async(square_add, A[2], 3);
    cout << "Evaluating a single index " << single_result.get() << endl;

    // Blocking map
    for (auto &x : A) {
        auto blocking_result = std::async(square_add, x, 3);
        cout << "Evaluating a single index " << blocking_result.get() << endl;
    }

    // Non-blocking map?
    return 0;
}
Note: to get this code to compile with gcc, I need the -pthread flag.
std::async returns a std::future, so you can store the futures in a vector and consume them later:
std::vector<std::future<double>> future_doubles;
future_doubles.reserve(A.size());
for (auto& x : A) {
    // Might block, but also might not.
    future_doubles.push_back(std::async(square_add, x, 3));
}

// Now block on all of them one at a time.
for (auto& f_d : future_doubles) {
    std::cout << f_d.get() << std::endl;
}
Now the above code may or may not actually run asynchronously: the implementation is free to decide whether running each task on another thread is worthwhile. If you want to force the tasks to run on separate threads, you can pass an optional launch policy to std::async, changing the call to
future_doubles.push_back(std::async(std::launch::async, square_add, x, 3));
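The difference between the two standard policies can be seen in a small sketch: std::launch::async starts the task on a new thread right away, while std::launch::deferred postpones the call until .get() or .wait() is invoked and then runs it on the calling thread. A minimal illustration (my own example, not part of the answer above):

#include <future>
#include <iostream>

double square_add(double x, double y) { return x*x + y; }

int main() {
    // Forced to run on a separate thread, starting immediately.
    auto eager = std::async(std::launch::async, square_add, 2.0, 3.0);

    // Deferred: square_add only runs when .get() (or .wait()) is called,
    // and it runs on the thread that calls it.
    auto lazy = std::async(std::launch::deferred, square_add, 2.0, 3.0);

    std::cout << "eager: " << eager.get() << std::endl;
    std::cout << "lazy:  " << lazy.get() << std::endl;
    return 0;
}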
For more info on std::async and the launch policies, see the documentation for std::async and std::launch.
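Putting it together with the original goal of getting the results back in a vector of the same length and order as the input, a minimal complete sketch (assuming the same square_add function from the question):

#include <future>
#include <iostream>
#include <vector>

double square_add(double x, double y) { return x*x + y; }

int main() {
    std::vector<double> A = {1, 2, 3, 4, 5};

    // Launch one task per element; the futures preserve the input order.
    std::vector<std::future<double>> future_doubles;
    future_doubles.reserve(A.size());
    for (auto& x : A)
        future_doubles.push_back(std::async(std::launch::async, square_add, x, 3));

    // Collect into a results vector of the same length and order as A.
    std::vector<double> results;
    results.reserve(A.size());
    for (auto& f : future_doubles)
        results.push_back(f.get());

    for (double r : results)
        std::cout << r << std::endl;
    return 0;
}

Because the futures are stored in the same order as the elements of A, draining them with .get() in sequence gives results in the original order, regardless of which task finished first.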