Consider the following code.
#include <Eigen/Dense>

const int N = 100;
const float alpha = 0.9f;
Eigen::MatrixXf myVec = Eigen::MatrixXf::Random(N, 1);

// Fill a symmetric matrix (the entries i+j are just placeholder data).
Eigen::MatrixXf symmetricMatrix(N, N);
for (int i = 0; i < N; i++)
    for (int j = 0; j <= i; j++)
        symmetricMatrix(i, j) = symmetricMatrix(j, i) = i + j;

// Exponential averaging step: A = alpha*A + (1-alpha)*x*x'
symmetricMatrix *= alpha;
symmetricMatrix += ((1 - alpha) * myVec * myVec.adjoint());
It essentially implements exponential averaging. I know that the last line can be optimized in the following way:
symmetricMatrix.selfadjointView<Eigen::Upper>().rankUpdate(myVec, 1 - alpha);
I would like to know whether I can combine the last two lines in an efficient way.
In short, I would like to compute A = alpha*A + (1-alpha)*(x*x').
Most importantly, you should declare myVec as Eigen::VectorXf if it is guaranteed to be a vector, and make sure you compile with -O3 -march=native -DNDEBUG.
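For instance, the declaration from the question would become the following (a minimal sketch; Eigen::VectorXf::Random(N) produces the same kind of random column vector as the MatrixXf version did):

Eigen::VectorXf myVec = Eigen::VectorXf::Random(N);  // one column, known at compile time

Knowing at compile time that myVec is a vector lets Eigen pick rank-1 (outer-product) kernels for v*v' instead of treating it as a general matrix-matrix product.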
You can try these alternatives (I'm using A and v to save typing); which one is the fastest likely depends on your problem size and your CPU:
A.noalias() = alpha * A + (1.0f-alpha)*v*v.adjoint();
A.noalias() = alpha * A + (1.0f-alpha)*v.lazyProduct(v.adjoint());
A.noalias() = alpha * A + ((1.0f-alpha)*v).lazyProduct(v.adjoint());
A.noalias() = alpha * A + v.lazyProduct((1.0f-alpha)*v.adjoint());
A.triangularView<Eigen::Upper>() = alpha * A + (1.0f-alpha)*v*v.adjoint();
// or any `lazyProduct` as above.
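Since the winner depends on size and hardware, it is worth timing the variants on your actual N. Here is a minimal, self-contained timing sketch (the runBench helper and the sizes are illustrative, not part of Eigen):

#include <Eigen/Dense>
#include <chrono>
#include <iostream>

// Hypothetical helper: runs one update variant `reps` times and returns milliseconds.
template <class Update>
double runBench(Eigen::MatrixXf& A, const Eigen::VectorXf& v, int reps, Update update) {
    auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r)
        update(A, v);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const int N = 1000;
    const float alpha = 0.9f;
    Eigen::MatrixXf T = Eigen::MatrixXf::Random(N, N);
    Eigen::MatrixXf A = T + T.transpose();           // symmetric starting matrix
    Eigen::VectorXf v = Eigen::VectorXf::Random(N);

    std::cout << "plain:       " << runBench(A, v, 100,
        [&](Eigen::MatrixXf& M, const Eigen::VectorXf& x) {
            M.noalias() = alpha * M + (1.0f - alpha) * x * x.adjoint();
        }) << " ms\n";
    std::cout << "lazyProduct: " << runBench(A, v, 100,
        [&](Eigen::MatrixXf& M, const Eigen::VectorXf& x) {
            M.noalias() = alpha * M + ((1.0f - alpha) * x).lazyProduct(x.adjoint());
        }) << " ms\n";
}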
Unfortunately, .noalias() and .triangularView() can't be combined at the moment.
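If you take the triangularView route, only the upper half of A holds the updated values; when a full matrix is needed downstream, you can mirror the upper triangle through a selfadjoint view (a small sketch; Afull is an illustrative name):

A.triangularView<Eigen::Upper>() = alpha * A + (1.0f - alpha) * v * v.adjoint();
Eigen::MatrixXf Afull = A.selfadjointView<Eigen::Upper>();  // copies the upper triangle into both halves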
You can also consider deferring the scaling entirely: update only the rank-1 term each iteration with
A.selfadjointView<Eigen::Upper>().rankUpdate(v, (1.0f-alpha)/alpha);
and every N iterations scale your A matrix by pow(alpha, N).
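Here is a minimal sketch of that deferred-scaling loop (getNextVector and numSteps are assumed placeholders, and N is the rescale interval). One caveat: because the stored matrix represents the true average scaled by a growing power of 1/alpha, the rank-update coefficient has to be divided by alpha after each step and reset at every rescale, otherwise older terms get the wrong weight:

#include <cmath>  // for std::pow

float coeff = (1.0f - alpha) / alpha;                // weight for the current step
for (int k = 1; k <= numSteps; ++k) {
    Eigen::VectorXf v = getNextVector();             // hypothetical input stream
    A.selfadjointView<Eigen::Upper>().rankUpdate(v, coeff);
    coeff /= alpha;                                  // next step needs (1-alpha)/alpha^(k+1)
    if (k % N == 0) {                                // fold the deferred alpha factors back in
        A *= static_cast<float>(std::pow(alpha, N));
        coeff = (1.0f - alpha) / alpha;
    }
}

After each rescale the upper triangle of A again holds the exact exponential average, so the cost per iteration is just one rank-1 update plus an occasional O(N^2) scaling.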