I've been using the Eigen3 linear algebra library in C++ for a while, and I've always tried to take advantage of its vectorization performance benefits. Today I decided to test how much vectorization really speeds my programs up. So I wrote the following test program:
--- eigentest.cpp ---
#include <eigen3/Eigen/Dense>
#include <iostream>

using namespace Eigen;

int main() {
    Matrix4d accumulator = Matrix4d::Zero();
    Matrix4d randMat = Matrix4d::Random();
    Matrix4d constMat = Matrix4d::Constant(2);
    for (int i = 0; i < 1000000; i++) {
        randMat += constMat;
        accumulator += randMat * randMat;
    }
    std::cout << accumulator(0,0) << "\n"; // To avoid optimizing everything away
    return 0;
}
Then I ran this program after compiling it with different compiler options (the results aren't one-off; many runs give similar timings):
$ g++ eigentest.cpp -o eigentest -DNDEBUG -std=c++0x -march=native
$ time ./eigentest
5.33334e+18
real 0m4.409s
user 0m4.404s
sys 0m0.000s
$ g++ eigentest.cpp -o eigentest -DNDEBUG -std=c++0x
$ time ./eigentest
5.33334e+18
real 0m4.085s
user 0m4.040s
sys 0m0.000s
$ g++ eigentest.cpp -o eigentest -DNDEBUG -std=c++0x -march=native -O3
$ time ./eigentest
5.33334e+18
real 0m0.147s
user 0m0.136s
sys 0m0.000s
$ g++ eigentest.cpp -o eigentest -DNDEBUG -std=c++0x -O3
$ time ./eigentest
5.33334e+18
real 0m0.025s
user 0m0.024s
sys 0m0.000s
And here's my relevant CPU information:
model name : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow extd_apicid pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dn
I know that there's no vectorization going on when I don't use the compiler option -march=native, because when I don't use it I never get a segmentation fault or a wrong result due to vectorization, as opposed to the case where I do use it (with -DNDEBUG).
These results lead me to believe that, at least on my CPU, vectorization with Eigen3 results in slower execution. Who should I blame? My CPU, Eigen3, or gcc?
Edit: To remove any doubts, I've now tried adding the -DEIGEN_DONT_ALIGN compiler option in the cases where I'm measuring the no-vectorization performance, and the results are the same. Furthermore, when I add -DEIGEN_DONT_ALIGN along with -march=native, the results become very close to the case without -march=native.
Answer: It seems that the compiler is smarter than you think and still optimizes a lot of stuff away.
On my platform, I get about 9ms without -march=native and about 39ms with -march=native. However, if I replace the line above the return by

std::cout << accumulator << "\n";

then the timings change to 78ms without -march=native and about 39ms with -march=native.
Thus, it seems that without vectorization, the compiler realizes that you only use the (0,0) element of the matrix and so it only computes that element. However, it can't do that optimization if vectorization is enabled.
If you output the whole matrix, thus forcing the compiler to compute all the entries, then vectorization speeds up the program by a factor of 2, as expected (though I'm surprised to see that it is exactly a factor of 2 in my timings).