I am writing a program that needs to multiply hundreds of matrices in parallel using CUDA. Can somebody explain how to perform this operation?
I have also seen that the Kepler architecture is capable of dynamic parallelism. Has anybody used this feature, and if so, on which Nvidia graphics card?
The easiest way to get fast, parallel matrix multiplication in CUDA is through the ArrayFire CUDA library using its GFOR loop. Here's some code that does what you want:
#include <arrayfire.h>
using namespace af;

int main() {
    int n = 8, m = 8;  // dimensions
    int t = 10;        // number of different matrices

    array A = randu(m, n, t);  // many matrices
    array B = randu(m, n);     // one matrix
    array C = zeros(m, n, t);  // destination

    // multiply C = A*B for all A, at the same time
    gfor (array i, A.dims(2)) {
        C(span, span, i) = matmul(A(span, span, i), B);
    }

    print(A);
    print(B);
    print(C);
    return 0;
}
ArrayFire automatically tiles the computation for efficient execution on the GPU; all of that optimization happens behind the scenes for you. I find it faster than trying to write the kernels by hand myself.
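For comparison, if you do want to write it by hand, cuBLAS provides a batched GEMM routine, cublasSgemmBatched, which multiplies many small matrices in a single call. The sketch below is not from the original answer: the sizes, the back-to-back buffer layout, and the skipped data-fill step are assumptions for illustration, and error checking is omitted.

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
    const int m = 8, n = 8, k = 8;  // dimensions of each matrix
    const int batch = 10;           // number of matrix products

    // One contiguous buffer per operand; matrices stored back to back.
    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, sizeof(float) * m * k * batch);
    cudaMalloc((void**)&dB, sizeof(float) * k * n * batch);
    cudaMalloc((void**)&dC, sizeof(float) * m * n * batch);
    // ... fill dA and dB with your data here (omitted) ...

    // cublasSgemmBatched takes arrays of device pointers, one per matrix.
    std::vector<const float*> hA(batch), hB(batch);
    std::vector<float*> hC(batch);
    for (int i = 0; i < batch; ++i) {
        hA[i] = dA + i * m * k;
        hB[i] = dB + i * k * n;
        hC[i] = dC + i * m * n;
    }
    const float **dAarr; const float **dBarr; float **dCarr;
    cudaMalloc((void**)&dAarr, sizeof(float*) * batch);
    cudaMalloc((void**)&dBarr, sizeof(float*) * batch);
    cudaMalloc((void**)&dCarr, sizeof(float*) * batch);
    cudaMemcpy(dAarr, hA.data(), sizeof(float*) * batch, cudaMemcpyHostToDevice);
    cudaMemcpy(dBarr, hB.data(), sizeof(float*) * batch, cudaMemcpyHostToDevice);
    cudaMemcpy(dCarr, hC.data(), sizeof(float*) * batch, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;

    // Computes C_i = A_i * B_i for every i in the batch with one call.
    cublasSgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                       m, n, k,
                       &alpha,
                       dAarr, m,   // leading dimension of each A (column-major)
                       dBarr, k,   // leading dimension of each B
                       &beta,
                       dCarr, m,   // leading dimension of each C
                       batch);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    cudaFree(dAarr); cudaFree(dBarr); cudaFree(dCarr);
    return 0;
}

Compile with nvcc and link against cuBLAS (e.g. nvcc batched.cu -lcublas). Keep in mind that cuBLAS assumes column-major storage, which is why the leading dimensions above are the row counts of each matrix.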