One of the first things one learns about programming efficiently in MATLAB is to avoid dynamically resizing arrays. The standard example is as follows.
N = 1000;
% Method 0: Bad
clear a
for i=1:N
a(i) = cos(i);
end
% Method 1: Better
clear a; a = zeros(N,1);
for i=1:N
a(i) = cos(i);
end
The 'Bad' variant here requires O(N^2) time to run, as it must allocate a new array and copy the old values at each iteration of the loop.
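As a rough illustration (my own sketch, not part of the original example), the difference can be timed directly with tic/toc; the exact numbers depend on the MATLAB version and machine, and newer releases grow arrays more cleverly, so the gap may be smaller there.
% Sketch: time the growing loop against the pre-allocated loop
N = 1e4;
tic
clear a
for i = 1:N
    a(i) = cos(i);            % array may be re-allocated as it grows
end
tGrow = toc;
tic
b = zeros(N,1);               % allocate once up front
for i = 1:N
    b(i) = cos(i);
end
tPre = toc;
fprintf('growing: %f s, pre-allocated: %f s\n', tGrow, tPre);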
My own preferred practice when debugging is to allocate the array with NaN, which is harder to confuse with a valid value than 0.
% Method 2: Easier to Debug
clear a; a = NaN(N,1);
for i=1:N
a(i) = cos(i);
end
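To see why this helps when debugging, here is a small sketch of my own (not part of the example above): if the fill loop accidentally skips some indices, the leftover NaN entries are easy to find.
% Sketch: unfilled entries stay NaN and are easy to detect
N = 10;
a = NaN(N,1);
for i = 1:2:N                 % bug: only the odd indices are filled
    a(i) = cos(i);
end
find(isnan(a))                % lists the indices that were never written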
However, one would naively think that once our code is debugged, we're wasting time by allocating an array and then filling it with 0 or NaN. As noted here, you can perhaps create an uninitialized array as follows:
% Method 3: Even Better?
clear a; a(N,1) = 0;
for i=1:N
a(i) = cos(i);
end
However, in my own tests (MATLAB R2013a), I notice no appreciable difference between methods 1 and 3, while method 2 takes more time. This suggests that MATLAB avoids explicitly initializing the array to zero when a = zeros(N,1) is called.
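One way to compare just the allocation step is with timeit (built into MATLAB from R2013b; older releases can use the File Exchange version). This is only a sketch; allocMethod3 is a hypothetical helper file, since the a(N,1) = 0 trick cannot be written as an anonymous function.
% Sketch: time only the allocation of each method (results will vary)
N = 1e7;
tZeros = timeit(@() zeros(N,1));
tNaN   = timeit(@() NaN(N,1));
tGrow  = timeit(@() allocMethod3(N));   % hypothetical helper, see below
fprintf('zeros: %g s, NaN: %g s, a(N,1)=0: %g s\n', tZeros, tNaN, tGrow);

% allocMethod3.m (hypothetical helper)
function a = allocMethod3(N)
a(N,1) = 0;                   % grow an empty array to N-by-1 in one step
end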
Thus, I'm curious to know whether MATLAB really initializes the memory when zeros(N,1) is called, and whether any of these pre-allocation methods is actually preferable in practice.
The test
Using MATLAB R2013b and an Intel Xeon 3.6 GHz with 16 GB RAM, I ran the code below to profile the methods. I distinguished three methods and only considered 1D arrays, i.e. vectors. Methods 1 and 2 have been tested using both column vectors and row vectors, i.e. (n,1) and (1,n).
Method 1 (M1R, M1C)
a = zeros(1,n);
Method 2 (M2R, M2C)
a = NaN(1,n);
Method 3 (M3)
a(n) = 0;
Results
The timing results versus the number of elements have been plotted on a double logarithmic scale in figure timing1D.
As shown, the assignment time of the third method is almost independent of the vector size, while the others increase steadily, suggesting an implicit (lazy) definition of the vector.
Discussion
MATLAB does a lot of code optimization with its JIT (just-in-time) compiler, i.e. code optimization during run-time. So it is a valid question whether the part of the code that runs faster does so because of the programming (which is always the same, optimized or not) or because of the optimization. To test this, the optimization can be turned off with feature('accel','off'). The results of running the code again are rather interesting:
With the accelerator off, Method 1 is now optimal, both for row and column vectors, and method 3 behaves like the other methods did in the first test.
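For reference, a minimal sketch of how a single measurement can be repeated with the accelerator switched off and then on again (the feature('accel',...) switch is undocumented and may no longer exist in newer releases):
% Sketch: one timing with the accelerator off, then on again
n = 1e7;
feature('accel','off')
tic
a = zeros(1,n);
a(randi(n,1)) = 1;
tOff = toc;
clear a
feature('accel','on')
tic
a = zeros(1,n);
a(randi(n,1)) = 1;
tOn = toc;
fprintf('accel off: %f s, accel on: %f s\n', tOff, tOn);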
Conclusion
Optimizing memory pre-allocation is useless and a waste of time, since MATLAB will optimize it for you anyway.
Note that memory should be pre-allocated, but the way in which you do it doesn't matter. The performance of pre-allocation depends largely on whether or not MATLAB's JIT compiler chooses to optimize your code. That in turn depends on all the other content of your .m-file, since the compiler considers chunks of code at a time and then tries to optimize them (it even has a memory, so running a file several times might result in an even lower execution time). Also, memory pre-allocation is usually a very short process compared to the calculation performed afterwards.
In my opinion, memory should be pre-allocated with either method 1 or method 2 to keep the code readable, using the functions that the MATLAB help suggests, since these are the ones most likely to be improved in the future.
Code used
clear all
clc
feature('accel','on')
number1D=30;
nn1D=2.^(1:number1D);
timings1D=zeros(5,number1D);
for ii=1:length(nn1D)
n=nn1D(ii);
% 1D
tic
a = zeros(1,n);
a(randi(n,1))=1;
timings1D(1,ii)=toc;
fprintf('1D row vector method1 took: %f\n',timings1D(1,ii))
clear a
tic
b = zeros(n,1);
b(randi(n,1))=1;
timings1D(2,ii)=toc;
fprintf('1D column vector method1 took: %f\n',timings1D(2,ii))
clear b
tic
c = NaN(1,n);
c(randi(n,1))=1;
timings1D(3,ii)=toc;
fprintf('1D row vector method2 took: %f\n',timings1D(3,ii))
clear c
tic
d = NaN(n,1);
d(randi(n,1))=1;
timings1D(4,ii)=toc;
fprintf('1D column vector method2 took: %f\n',timings1D(4,ii))
clear d
tic
e(n) = 0;
e(randi(n,1))=1;
timings1D(5,ii)=toc;
fprintf('1D row vector method3 took: %f\n',timings1D(5,ii))
clear e
end
logtimings1D = log10(timings1D);
lognn1D=log10(nn1D);
figure(1)
clf()
hold on
plot(lognn1D,logtimings1D(1,:),'-k','LineWidth',2)
plot(lognn1D,logtimings1D(2,:),'--k','LineWidth',2)
plot(lognn1D,logtimings1D(3,:),'-.k','LineWidth',2)
plot(lognn1D,logtimings1D(4,:),'-','Color',[0.6 0.6 0.6],'LineWidth',2)
plot(lognn1D,logtimings1D(5,:),'--','Color',[0.6 0.6 0.6],'LineWidth',2)
xlabel('Number of elements (log10[-])')
ylabel('Timing of each method (log10[s])')
legend('M1R','M1C','M2R','M2C','M3','Location','NW')
title({'Various methods of pre-allocation in 1D','nr. of elements vs timing'})
hold off
Note
The lines containing c(randi(n,1))=1; do nothing except assign the value one to a random element of the pre-allocated array, so that the array is actually used and the JIT compiler is challenged a bit. These lines do not affect the pre-allocation measurement significantly, i.e. their cost is not measurable and does not affect the test.
How about letting MATLAB take care of the allocation for you?
clear a;
for i=N:-1:1
a(i) = cos(i);
end
Then MATLAB can allocate and fill the array with whatever it thinks will be optimal (probably zero). You don't have the debugging advantage of NaNs, though.
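A quick way to convince yourself that the very first pass through the loop already grows the array to its full size (again just a sketch of my own):
% Sketch: the first (i = N) assignment creates the full-size array
clear a
N = 1000;
a(N) = cos(N);                % equivalent to the loop's first iteration
numel(a)                      % already 1000; later iterations only fill it in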