I am writing a program in MATLAB (must use MATLAB and can't really use a MEX) to filter very large amounts of data.
One of the filters I need to implement requires me to compare a timestamp vector versus a list of known "bad" times around which other timestamps cannot occur.
A typical timestamp vector has about 2,000,000 entries, and I have a list of about 300,000 "bad times."
Here's a working example: if TIME=[1, 2.3, 5.5, 9.1, 10], BAD_TIMES=[5.2, 9.3], and the tolerance is tolerance=0.25, then all timestamps in TIME between 4.95 and 5.45, or between 9.05 and 9.55, must be erased. This means the cleaned vector should be TIME_CLEAN=[1, 2.3, 5.5, 10].
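For reference, the worked example can be checked directly with a brute-force pairwise comparison (O(N·M), so only viable for small inputs; uses implicit expansion, which needs R2016b+, otherwise substitute bsxfun):

```matlab
% Brute-force check of the worked example: compare every timestamp
% against every bad time. Fine for tiny inputs, far too slow at scale.
TIME = [1, 2.3, 5.5, 9.1, 10];
BAD_TIMES = [5.2, 9.3];
tolerance = 0.25;
bad = any(abs(TIME - BAD_TIMES.') <= tolerance, 1); % 2x5 comparison matrix, collapsed per column
TIME_CLEAN = TIME(~bad) % [1, 2.3, 5.5, 10]
```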
This problem is straightforward to solve, and I have solved it in about 4 or 5 different ways. However, for a 1,000,000 timestamp job, this problem can easily take an hour to compute.
I am looking to solve this type of problem in under 2 minutes on a typical Core-i7 workstation for this filter to be viable with this many time entries.
I have included a working version of this code. I understand code vectorization and functions such as bsxfun() can help, but the improvement is marginal relative to the type of efficiency I require for this filter.
Are there any very clever ways to solve this problem in a very efficient fashion? Any help would be greatly appreciated.
P.S. The code below is complete; it generates all the data needed to set up the problem and solves it (although VERY slowly!). Change the NO_OF_TIMESTAMPS variable to something larger (such as 1,000,000) to watch it crawl!
clear all %% CLEAR WORKSPACE
close all %% CLOSE FIGURES
clc %% CLEAR COMMAND WINDOW
NO_OF_TIMESTAMPS=10000; %% NUMBER OF TIMESTAMPS IN ORIGINAL DATA
TOLERANCE=2; %% TOLERANCE AROUND TIMESTAMP
A=sort(randi(NO_OF_TIMESTAMPS/10,NO_OF_TIMESTAMPS,1)); %% GENERATE ARTIFICIAL TIMESTAMPS
B=unique(sort(round(randi([NO_OF_TIMESTAMPS/2,NO_OF_TIMESTAMPS*5],[NO_OF_TIMESTAMPS/10,1])/10))); %% GENERATE ARTIFICIAL LIST OF BAD TIMESTAMPS
B_LB=B-TOLERANCE; %% CREATE A LIST OF LOWERBOUND BAD TIMESTAMPS
B_UB=B+TOLERANCE; %% CREATE A LIST OF UPPERBOUND BAD TIMESTAMPS
B_RANGE=[B_LB B_UB]; %% AUGMENTED MATRIX COMPOSED OF VECTORS B_LB and B_UB
A_ROWS=size(A,1); %% SIZE OF A;
B_ROWS=size(B,1); %% SIZE OF B;
tic; %% START TIMER
A_TO_CLEAN=true(A_ROWS,1); %% LOGICAL VECTOR TO BE USED IN FILTERING
for ii=1:A_ROWS
for jj=1:B_ROWS
if A(ii)>=B_RANGE(jj,1) && A(ii)<=B_RANGE(jj,2) %% CHECK EACH MEMBER OF A VERSUS EACH MEMBER OF B_RANGE
A_TO_CLEAN(ii)=false; %% MARK INDEX SO THAT WE CAN DELETE LATER
break; %% A(ii) CAN ONLY BE ERASED ONCE, SO BREAK jj LOOP AND GO TO NEXT ii
end
end
end
CLEAN=A(A_TO_CLEAN); %% KEEP GOOD ENTRIES VIA LOGICAL INDEXING
toc; %% END TIMER
clearvars -except A B_RANGE CLEAN %% ONLY SHOW RELEVANT VARIABLES
The trick to making this efficient is to first sort both vectors. Then loop through one of the vectors while maintaining an index into the second vector that tracks the closest relevant element. That is, you will have something like
ix2 = 1;
for ix1 = 1:length(timestamps)
while ix2 < length(badTimes) && badTimes(ix2) < timestamps(ix1)
ix2 = ix2+1;
end
% check timestamps(ix1) against badTimes(ix2), and maybe badTimes(ix2 + 1) and badTimes(ix2 - 1)
end
Sorting is relatively efficient, especially using the built-ins. And now you only need a single loop.
This now bears a resemblance to the merge step of a merge-sort algorithm.
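A minimal, self-contained sketch of this single-pass idea (variable names are illustrative; it assumes both vectors are sorted, which sort() guarantees, and runs in O(N+M)):

```matlab
% Single-pass scan: both TIME and the interval list are sorted, so the
% pointer j into the interval list only ever moves forward.
TIME = sort(TIME);
LB = sort(BAD_TIMES) - tolerance;   % interval lower bounds
UB = sort(BAD_TIMES) + tolerance;   % interval upper bounds
keep = true(size(TIME));
j = 1;
for i = 1:numel(TIME)
    % skip intervals that end before this timestamp
    while j <= numel(UB) && UB(j) < TIME(i)
        j = j + 1;
    end
    if j <= numel(LB) && TIME(i) >= LB(j)
        keep(i) = false;            % TIME(i) lies in [LB(j), UB(j)]
    end
end
TIME_CLEAN = TIME(keep);
```

Because the same tolerance is added to every sorted bad time, LB and UB stay sorted, so the forward-only pointer is safe even if intervals overlap.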
This takes 0.025 s for 1e6 timestamps on my computer. The method goes linearly through A, updating the index as it steps through B_RANGE. Special care is needed for the end-of-array cases.
BR=B_RANGE';
C=true(size(A));
j=1;
i=1;
tic;
while i<=A_ROWS && j<=B_ROWS
% find start of bad signal
while A(i)<BR(1,j) && i<A_ROWS
i=i+1;
end
% finish at the end of A
if i==A_ROWS
break;
end
ii=i;
% find end of bad signal
while A(ii)<=BR(2,j) && ii<A_ROWS
ii=ii+1;
end
% special case: last element of A may also lie inside the bad range
if ii==A_ROWS && A(ii)<=BR(2,j)
ii=ii+1;
end
% mark bad signal entries
C(i:ii-1)=false;
i=ii;
j=j+1;
end
AM=A(C);
toc
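As an alternative sketch, the same interval test can be fully vectorized by treating the interleaved bounds as bin edges with discretize (R2015a+). This assumes the bad intervals are sorted and non-overlapping, so the edge vector is strictly increasing; note that a timestamp exactly equal to an upper bound lands in the next bin, so boundary behavior differs slightly at exact edges:

```matlab
% Bin each timestamp against the interleaved [lb1 ub1 lb2 ub2 ...] edges.
% Odd-numbered bins are the insides of bad intervals.
edges = reshape(B_RANGE.', [], 1);   % 2*B_ROWS x 1, strictly increasing
bin = discretize(A, edges);          % NaN for values outside all edges
bad = ~isnan(bin) & mod(bin, 2) == 1;
CLEAN = A(~bad);
```

On older MATLAB versions, histc(A, edges) can play the same role as discretize.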