
How to do the following matrix multiplication more efficiently in MATLAB?

I was wondering if there is any way of doing the following matrix multiplication more efficiently, for example by vectorizing it:

m=1000;n=500;
a=zeros(n,1);
b=rand(n,1);
A=rand(m,n);
B=rand(m,m);
for i=1:n
    a(i)=b'*(A'*B(i,:)'*B(i,:)*A)*b;
end

Thanks in advance

Ignacio asked Feb 09 '23

2 Answers

I suggest reordering the matrix multiplications to have B and B' on the sides, and using diag to get the diagonal elements:

m=1000;n=500;
a=zeros(n,1);
b=rand(n,1);
A=rand(m,n);
B=rand(m,m);

%original for comparison
for i=1:n
    a(i)=b'*(A'*B(i,:)'*B(i,:)*A)*b;
end

%first vectorized version: full diag, then keep the first n elements
%a2=diag(B*A*b*b'*A'*B');
%a2=a2(1:n); %emulate the original cut-off at n

%improved version: only compute the n rows that are actually needed
a2=diag(B(1:n,:)*A*b*b'*A'*B(1:n,:)');

%compare difference
max(abs(a-a2)) %absolute error
max(abs(a-a2))./max(abs(a)) %relative error

The absolute error seems large compared to rand():

>> max(abs(a-a2)) %absolute error

ans =

   8.1062e-06

but the relative error shows that we are correct to machine precision:

>> max(abs(a-a2))./max(abs(a)) %relative error

ans =

   1.9627e-15

The maths behind the rearranging is that a single term in your original loop,

b'*(A'*B(i,:)'*B(i,:)*A)*b

is a series of matrix products, if you think of vectors as row/column matrices. Since the first and last dimensions (through b' and b) are both 1, you get a scalar result for each i. If you think of this scalar as a 1 x 1 matrix, you can replace it with its own trace, and matrices can be cyclically permuted under a trace:

Tr (b' * A' * Bi' * Bi * A * b) = Tr (Bi * A * b * b' * A' * Bi')

where for simplicity Bi stands for the ith row of B. But what we have right now inside the trace is again a scalar, so we can remove the trace. So for each i you need

a(i)=B(i,:) * A * b * b' * A' * B(i,:)';

which is clearly the (i,i) (diagonal) component of the matrix

B * A * b * b' * A' * B'
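
If you want to convince yourself of the rearrangement numerically before trusting the diag form, a quick sketch along these lines (reusing the variables defined above and checking a single row i) compares the original term with the permuted one:

%numerical check of the cyclic permutation for one row
i = 1;
Bi = B(i,:);                  %1-by-m row of B
lhs = b'*(A'*Bi'*Bi*A)*b;     %original scalar term
rhs = Bi*A*(b*b')*A'*Bi';     %cyclically permuted form
abs(lhs-rhs)                  %should be around machine precision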

You can greatly reduce the number of operations using associativity and the transpose-product property:

t = B(1:n,:)*A*b;   %only the first n rows of B enter the original loop
a = abs(t).^2;      %squared magnitudes give the loop result

This works because the original expression b'*(A'*B(i,:)'*B(i,:)*A)*b equals b'*A'*B(i,:)'*B(i,:)*A*b (by associativity), which equals (B(i,:)*A*b)'*(B(i,:)*A*b) (by the transpose-product property), i.e. the squared magnitude of the scalar B(i,:)*A*b; and that can be computed for all i at once as abs(B(1:n,:)*A*b).^2.

If your matrices contain only real numbers, you can perhaps gain some speed by removing the abs.
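
If you want a rough idea of the speedup, a simple tic/toc sketch along these lines (same sizes as in the question; timeit would give more reliable numbers) can be used:

%rough timing comparison: loop vs. vectorized version
m=1000;n=500;
b=rand(n,1);A=rand(m,n);B=rand(m,m);

tic
a=zeros(n,1);
for i=1:n
    a(i)=b'*(A'*B(i,:)'*B(i,:)*A)*b;
end
toc %loop version

tic
a2=abs(B(1:n,:)*A*b).^2;
toc %vectorized version

max(abs(a-a2))./max(abs(a)) %relative error check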

Luis Mendo answered Feb 11 '23