I was wondering if there was a more efficient way of writing a = a + b + c?
thrust::transform(b.begin(), b.end(), c.begin(), b.begin(), thrust::plus<int>()); // b = b + c (note: overwrites b)
thrust::transform(a.begin(), a.end(), b.begin(), a.begin(), thrust::plus<int>()); // a = a + (b + c)
This works, but is there a way to get the same effect using just one line of code? I looked at the saxpy implementation in the examples, but that uses two vectors and a constant value.
Is this more efficient?
#include <thrust/host_vector.h>
#include <thrust/for_each.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>
#include <iostream>

struct arbitrary_functor
{
    template <typename Tuple>
    __host__ __device__
    void operator()(Tuple t)
    {
        // out[i] = A[i] + B[i] + C[i] (in main below, the output iterator is A itself)
        thrust::get<3>(t) = thrust::get<0>(t) + thrust::get<1>(t) + thrust::get<2>(t);
    }
};
int main(){
    // allocate storage
    thrust::host_vector<int> A;
    thrust::host_vector<int> B;
    thrust::host_vector<int> C;

    // initialize input vectors
    A.push_back(10);
    B.push_back(10);
    C.push_back(10);

    // apply the transformation
    thrust::for_each(thrust::make_zip_iterator(thrust::make_tuple(A.begin(), B.begin(), C.begin(), A.begin())),
                     thrust::make_zip_iterator(thrust::make_tuple(A.end(), B.end(), C.end(), A.end())),
                     arbitrary_functor());

    // print the output
    std::cout << A[0] << std::endl;
    return 0;
}
a = a + b + c has low arithmetic intensity (only two arithmetic operations for every four memory operations), so the computation is going to be memory bandwidth bound. To compare the efficiency of your proposed solutions, we need to measure their bandwidth demands.
Each call to transform in the first solution requires two loads and one store for each call to plus, so we can model the cost of each transform call as 3N, where N is the size of the vectors a, b, and c. Since there are two invocations of transform, the cost of this solution is 6N.
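To make that accounting concrete, here are the two calls from the question annotated with the per-element memory traffic the model assigns to them:

thrust::transform(b.begin(), b.end(), c.begin(), b.begin(), thrust::plus<int>()); // loads b[i], c[i]; stores b[i] -> 3N
thrust::transform(a.begin(), a.end(), b.begin(), a.begin(), thrust::plus<int>()); // loads a[i], b[i]; stores a[i] -> 3N
                                                                                  // total: 6N memory operations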
We can model the cost of the second solution in the same way. Each invocation of arbitrary_functor requires three loads and one store, so the cost model for this solution is 4N, which implies that the for_each solution should be more efficient than calling transform twice. When N is large, the second solution should run about 6N/4N = 1.5x faster than the first.
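If you want to verify that prediction rather than rely on the model alone, a minimal timing sketch along the following lines (my own harness, assuming a CUDA device, thrust::device_vector storage, and an arbitrarily chosen N) compares the two versions. The inputs are not reset between the two runs, which is fine here because only the timings matter.

// compile with nvcc as a .cu file
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/for_each.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>
#include <thrust/functional.h>
#include <cuda_runtime.h>
#include <cstdio>

// same functor as in the question
struct arbitrary_functor
{
    template <typename Tuple>
    __host__ __device__
    void operator()(Tuple t)
    {
        thrust::get<3>(t) = thrust::get<0>(t) + thrust::get<1>(t) + thrust::get<2>(t);
    }
};

int main()
{
    const int N = 1 << 24;                                    // vector size chosen arbitrarily
    thrust::device_vector<int> a(N, 1), b(N, 2), c(N, 3);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms_transform = 0.0f, ms_for_each = 0.0f;

    // version 1: two transform calls (6N memory operations under the cost model)
    cudaEventRecord(start);
    thrust::transform(b.begin(), b.end(), c.begin(), b.begin(), thrust::plus<int>());
    thrust::transform(a.begin(), a.end(), b.begin(), a.begin(), thrust::plus<int>());
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms_transform, start, stop);

    // version 2: one for_each over a zip_iterator (4N memory operations under the cost model)
    cudaEventRecord(start);
    thrust::for_each(thrust::make_zip_iterator(thrust::make_tuple(a.begin(), b.begin(), c.begin(), a.begin())),
                     thrust::make_zip_iterator(thrust::make_tuple(a.end(), b.end(), c.end(), a.end())),
                     arbitrary_functor());
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms_for_each, start, stop);

    printf("two transforms: %.2f ms, for_each: %.2f ms\n", ms_transform, ms_for_each);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}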
Of course, you could always combine zip_iterator with transform in a similar way to avoid two separate calls to transform.
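For example, here is a minimal sketch of that combination (the sum3 functor name is mine, and it assumes a Thrust version recent enough to deduce the functor's result type). It reduces the whole update to a single transform call:

// compile with nvcc as a .cu file
#include <thrust/host_vector.h>
#include <thrust/transform.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>
#include <iostream>

// returns a[i] + b[i] + c[i] for one zipped element
struct sum3
{
    template <typename Tuple>
    __host__ __device__
    int operator()(Tuple t) const
    {
        return thrust::get<0>(t) + thrust::get<1>(t) + thrust::get<2>(t);
    }
};

int main()
{
    thrust::host_vector<int> a(1, 10), b(1, 10), c(1, 10);

    // a = a + b + c in a single transform over a zip_iterator
    thrust::transform(thrust::make_zip_iterator(thrust::make_tuple(a.begin(), b.begin(), c.begin())),
                      thrust::make_zip_iterator(thrust::make_tuple(a.end(), b.end(), c.end())),
                      a.begin(),
                      sum3());

    std::cout << a[0] << std::endl;   // prints 30
    return 0;
}

transform applies sum3 to each zipped (a[i], b[i], c[i]) tuple and writes the result through a.begin(), so the per-element traffic is the same 4N as the for_each version; the difference from for_each is purely stylistic.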