 

When not to vectorize MATLAB code?

I'm working on some MATLAB code which processes a large (but not huge) dataset: 10,000 784-element vectors (not sparse), with derived information stored in a 10,000x10 sparse matrix. To get the code working I wrote some of the trickier parts iteratively, looping over the 10k items to process them, plus a few loops over the 10 columns of the sparse matrix for cleanup.

My process initially took 73 iterations (so, on the order of 730k inner-loop passes) and ran in about 120 seconds. Not bad, but this is MATLAB, so I set out to vectorize it to speed it up.

In the end I have a fully vectorized solution which gets the same answer (so it's correct, or at least as correct as my initial solution), but it takes 274 seconds to run: less than half the speed of the loop version!

This is the first time I've run into MATLAB code which runs slower vectorized than it does iteratively. Are there any rules of thumb or best practices for identifying when this is likely or possible?

I'd love to share the code for some feedback, but it's for a currently open school assignment, so I really can't right now. If it ends up being one of those "wow, that's weird, you probably did something wrong" things, I'll revisit this in a week or two to see if my vectorization is somehow off.

asked Nov 22 '09 by Donnie


2 Answers

Vectorisation in MATLAB often means allocating a lot more memory (making a much larger array to avoid the loop, e.g. by Tony's trick). With the improved JIT compiling of loops in recent versions, it's possible that the memory allocation required for your vectorised solution erases any advantage, but without seeing the code it's hard to say. MATLAB has an excellent line-by-line profiler which should help you see which particular parts of the vectorised version are taking the time.
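
To make the memory point concrete, here is a minimal sketch (a hypothetical example, not the asker's code) of Tony's trick next to the equivalent loop, plus how to invoke the profiler:

% Subtracting a column vector mu from every column of X, two ways.
X  = rand(784, 10000);          % 10k vectors of 784 elements each
mu = mean(X, 2);

% Loop version: allocates nothing beyond the preallocated output.
Y1 = zeros(size(X));
for k = 1:size(X, 2)
    Y1(:, k) = X(:, k) - mu;
end

% Tony's trick: mu(:, ones(1, 10000)) expands mu into a full
% 784-by-10000 matrix before subtracting, so peak memory use roughly
% doubles even though the loop disappears.
Y2 = X - mu(:, ones(1, size(X, 2)));

% The line-by-line profiler shows where the time actually goes
% (myfun is a placeholder for your own function):
%   profile on; myfun(); profile viewer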

answered by robince


Have you tried plotting the execution time as a function of problem size (either the number of elements per vector [currently 784], or the number of vectors [currently 10,000])? I ran into a similar anomaly when vectorizing a Gram-Schmidt orthogonalization algorithm; it turned out that the vectorized version was faster until the problem grew to a certain size, at which point the iterative version actually ran faster, as seen in this plot:

[Plot: execution time comparison between vectorized and unvectorized implementations of the Gram-Schmidt orthogonalization algorithm]

Here are the two implementations and the benchmarking script:

clgs.m

function [Q,R] = clgs(A)
% QR factorization by unvectorized classical Gram-Schmidt orthogonalization

[m,n] = size(A);

R = zeros(n,n);     % pre-allocate upper-triangular matrix

% iterate over columns
for j = 1:n
    v = A(:,j);

    % subtract projections onto the previous (already orthonormalized) columns
    for i = 1:j-1
        R(i,j) = A(:,i)' * A(:,j);
        v = v - R(i,j) * A(:,i);
    end

    R(j,j) = norm(v);
    A(:,j) = v / R(j,j);    % normalize (reuses the norm computed above)
end
Q = A;

clgs2.m

function [Q,R] = clgs2(A)
% QR factorization by classical Gram-Schmidt orthogonalization with a
% vectorized inner loop

[m,n] = size(A);
R = zeros(n,n);     % pre-allocate upper-triangular matrix

for k=1:n
    R(1:k-1,k) = A(:,1:k-1)' * A(:,k);
    A(:,k) = A(:,k) - A(:,1:k-1) * R(1:k-1,k);
    R(k,k) = norm(A(:,k));
    A(:,k) = A(:,k) / R(k,k);
end

Q = A;

benchgs.m

n = [300,350,400,450,500];

clgs_time=zeros(length(n),1);
clgs2_time=clgs_time;

for i = 1:length(n)
    A = rand(n(i));
    tic;
    [Q,R] = clgs(A);
    clgs_time(i) = toc;

    tic;
    [Q,R] = clgs2(A);
    clgs2_time(i) = toc;
end

semilogy(n,clgs_time,'b',n,clgs2_time,'r')
xlabel 'n', ylabel 'Time [seconds]'
legend('unvectorized CGS','vectorized CGS')
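
As a quick sanity check before trusting the timings (a minimal sketch using the two functions defined above), you can verify that each implementation returns an orthonormal Q that reconstructs A:

A = rand(500);
[Q,R] = clgs2(A);     % or clgs(A)
fprintf('orthonormality error: %g\n', norm(Q' * Q - eye(size(A,2))));
fprintf('reconstruction error: %g\n', norm(Q * R - A));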

answered by las3rjock