I'm investigating ways to speed up a large section of C++ code that uses automatic differentiation to compute Jacobians. This involves doing some amount of work in the actual residuals, but the majority of the work (based on profiled execution time) is in calculating the Jacobians.
This surprised me, since most of the Jacobians are propagated forward from 0s and 1s, so the amount of work should be 2-4x that of the function, not 10-12x. To model what a large part of the Jacobian work looks like, I made a minimal example with just a dot product (instead of the sin, cos, sqrt and more that would appear in a real situation) that the compiler should be able to optimize to a single return value:
#include <Eigen/Core>
#include <Eigen/Geometry>

using Array12d = Eigen::Matrix<double,12,1>;

double testReturnFirstDot(const Array12d& b)
{
    Array12d a;
    a.array() = 0.;
    a(0) = 1.;
    return a.dot(b);
}
Which should be the same as
double testReturnFirst(const Array12d& b)
{
    return b(0);
}
I was disappointed to find that, without fast-math enabled, none of GCC 8.2, Clang 6, or MSVC 19 was able to make any optimizations at all over the naive dot product with a matrix full of 0s. Even with fast-math (https://godbolt.org/z/GvPXFy) the optimizations are very poor in GCC and Clang (they still involve multiplications and additions), and MSVC doesn't do any optimizations at all.
I don't have a background in compilers, but is there a reason for this? I'm fairly sure that in a large proportion of scientific computations, better constant propagation/folding would expose more optimizations, even if the constant folding itself didn't result in a speedup.
While I'm interested in explanations for why this isn't done on the compiler side, I'm also interested in what I can do on the practical side to make my own code faster when facing these kinds of patterns.
Constant folding is an optimization technique that eliminates expressions that calculate a value that can already be determined before code execution. These are typically calculations that only reference constant values or expressions that reference variables whose values are constant.
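For instance (an illustrative snippet of my own, not from the original question):

// The compiler evaluates 60 * 60 * 24 at compile time and stores 86400
// directly in the binary; no multiplications happen at run time.
int seconds_per_day()
{
    return 60 * 60 * 24;
}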
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program. The compiler performs optimization based on the knowledge it has of the program.
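As an illustration of how much the chosen flags matter for the floating-point case (the codegen comments below match what GCC emits; treat them as an assumption for other compilers):

// g++ -O2 zero.cpp             -> keeps a real multiply (mulsd)
// g++ -O2 -ffast-math zero.cpp -> folds the whole function to return 0.0
double times_zero(double x)
{
    // Not foldable under strict IEEE-754: x may be NaN or infinity.
    return 0.0 * x;
}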
This is because Eigen explicitly vectorizes your code as 3 vmulpd, 2 vaddpd, and 1 horizontal reduction within the remaining 4 component registers (this assumes AVX; with SSE only you'll get 6 mulpd and 5 addpd). Without fast-math the compiler cannot touch any of this: under strict IEEE-754 semantics, 0.0 * x is not 0.0 (it is NaN when x is NaN or infinite), and x + 0.0 is not x (when x is -0.0 the result is +0.0), so multiplications by a known zero cannot simply be dropped. With -ffast-math, GCC and Clang are allowed to remove the last 2 vmulpd and vaddpd (and this is what they do), but they cannot really replace the remaining vmulpd and horizontal reduction that have been explicitly generated by Eigen.
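A minimal standalone sketch of those IEEE-754 corner cases (my own, not from the original answer):

#include <cmath>
#include <cstdio>

int main()
{
    double inf = INFINITY;
    double nan = std::nan("");
    std::printf("0.0 * inf  = %f\n", 0.0 * inf);  // prints nan: 0*x cannot fold to 0
    std::printf("0.0 * nan  = %f\n", 0.0 * nan);  // prints nan
    std::printf("-0.0 + 0.0 = %f\n", -0.0 + 0.0); // prints 0: x + 0.0 is not always x
    return 0;
}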
So what if you disable Eigen's explicit vectorization by defining EIGEN_DONT_VECTORIZE? Then you get what you expected (https://godbolt.org/z/UQsoeH), but other pieces of code might become much slower.
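If you go this route, note that the macro must be visible before the first Eigen header is included; a minimal sketch:

// Must be defined before any Eigen include; disables explicit
// vectorization for the whole translation unit.
#define EIGEN_DONT_VECTORIZE
#include <Eigen/Core>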
If you want to locally disable explicit vectorization and are not afraid of messing with Eigen's internals, you can introduce a DontVectorize option to Matrix and disable vectorization by specializing traits<> for this Matrix type:
static const int DontVectorize = 0x80000000;

namespace Eigen {
namespace internal {

// Specialize traits<> for the DontVectorize option: strip the
// PacketAccessBit so evaluators fall back to scalar code paths.
template<typename _Scalar, int _Rows, int _Cols, int _MaxRows, int _MaxCols>
struct traits<Matrix<_Scalar, _Rows, _Cols, DontVectorize, _MaxRows, _MaxCols> >
  : traits<Matrix<_Scalar, _Rows, _Cols> >
{
  typedef traits<Matrix<_Scalar, _Rows, _Cols> > Base;
  enum {
    EvaluatorFlags = Base::EvaluatorFlags & ~PacketAccessBit
  };
};

} // namespace internal
} // namespace Eigen

using ArrayS12d = Eigen::Matrix<double,12,1,DontVectorize>;
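With that in place, the motivating example can be rewritten against the scalar-only type; a sketch along the lines of the linked example (the function name is mine):

double testReturnFirstDotS(const ArrayS12d& b)
{
    ArrayS12d a;
    a.array() = 0.;
    a(0) = 1.;
    // With PacketAccessBit stripped, the dot product lowers to plain
    // scalar multiply-adds, which the compiler can now fold away.
    return a.dot(b);
}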
Full example here: https://godbolt.org/z/bOEyzv