I have written programs in C++, Python and Java for matrix multiplication and tested their speed for multiplying two 2000 x 2000 matrices (see post). The standard ikj implementation - which is in O(n^3) - took:
Now I have implemented the Strassen algorithm for matrix multiplication - which is in O(n^2.807) - in Python and C++, following the description on Wikipedia. These are the times I got:
Why is Strassen matrix multiplication so much slower than standard matrix multiplication?
This is especially astonishing, as it seems to contradict the experiences of others:
edit: The reason why Strassen matrix multiplication was slower in my case was that I had two functions, strassen and strassenRecursive. The first one resized the matrix to a power of two, if required, and called the second one. But strassenRecursive didn't recursively call itself; it called strassen instead, so every recursion level went back through the resizing wrapper.
The basic problem is that you're recursing down to a leaf size of 1 with your Strassen implementation. Strassen's algorithm has a better big-O complexity, but constant factors do matter in practice, which means that for smaller problem sizes you're better off with a standard n^3 matrix multiplication.
So to greatly improve your program instead of doing:
if (tam == 1) {
C[0][0] = A[0][0] * B[0][0];
return;
}
use

if (tam <= LEAF_SIZE) {
    // iterative solution here
}

LEAF_SIZE should be a constant that you have to determine experimentally for your given architecture. Depending on the architecture it may be larger or smaller - there are architectures where the constant factors for Strassen are so large that it's basically always worse than a simpler n^3 implementation for sensible matrix sizes. It all depends.
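To make the cutoff idea concrete, here is a minimal sketch of such a hybrid Strassen in C++. The names (Matrix, LEAF_SIZE, naiveMultiply, strassen) and the value 64 are illustrative choices of mine, not taken from the asker's code, and the sketch assumes the matrices are already padded to a power-of-two size, as in the question:

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

static const int LEAF_SIZE = 64;  // tune experimentally per architecture

// Standard ikj triple loop, used below the cutoff.
Matrix naiveMultiply(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)
            for (int j = 0; j < n; ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

Matrix add(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            C[i][j] = A[i][j] + B[i][j];
    return C;
}

Matrix sub(const Matrix& A, const Matrix& B) {
    int n = A.size();
    Matrix C(n, std::vector<double>(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            C[i][j] = A[i][j] - B[i][j];
    return C;
}

// Hybrid Strassen: recurse with the seven products, but fall back to the
// plain O(n^3) multiply once the submatrix is at or below LEAF_SIZE.
// Assumes n is a power of two.
Matrix strassen(const Matrix& A, const Matrix& B) {
    int n = A.size();
    if (n <= LEAF_SIZE)
        return naiveMultiply(A, B);  // iterative base case, not 1x1

    int half = n / 2;
    Matrix a(half, std::vector<double>(half)), b = a, c = a, d = a,
           e = a, f = a, g = a, h = a;
    for (int i = 0; i < half; ++i)
        for (int j = 0; j < half; ++j) {
            a[i][j] = A[i][j];            b[i][j] = A[i][j + half];
            c[i][j] = A[i + half][j];     d[i][j] = A[i + half][j + half];
            e[i][j] = B[i][j];            f[i][j] = B[i][j + half];
            g[i][j] = B[i + half][j];     h[i][j] = B[i + half][j + half];
        }

    // Strassen's seven recursive products.
    Matrix p1 = strassen(a, sub(f, h));
    Matrix p2 = strassen(add(a, b), h);
    Matrix p3 = strassen(add(c, d), e);
    Matrix p4 = strassen(d, sub(g, e));
    Matrix p5 = strassen(add(a, d), add(e, h));
    Matrix p6 = strassen(sub(b, d), add(g, h));
    Matrix p7 = strassen(sub(a, c), add(e, f));

    // Reassemble the four quadrants of the result.
    Matrix C(n, std::vector<double>(n));
    for (int i = 0; i < half; ++i)
        for (int j = 0; j < half; ++j) {
            C[i][j]               = p5[i][j] + p4[i][j] - p2[i][j] + p6[i][j];
            C[i][j + half]        = p1[i][j] + p2[i][j];
            C[i + half][j]        = p3[i][j] + p4[i][j];
            C[i + half][j + half] = p1[i][j] + p5[i][j] - p3[i][j] - p7[i][j];
        }
    return C;
}
```

Note that the recursion here calls strassen itself, so the cutoff test is hit at every level; only a padding wrapper, if you need one, should sit outside the recursion.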
Well, "arithmetic operations" are not the only things that count. It's not like everything else is free.
My naive guess would be that all this memory-allocating and copying beats the gain from having fewer arithmetic operations...
Memory access, in particular, can be quite expensive once it falls out of the cache; in comparison, arithmetic operations could be considered free :-)