Running my app through callgrind revealed that this line dwarfed everything else by a factor of about 10,000. I'm probably going to redesign around it, but it got me wondering: is there a better way to do it?
Here's what I'm doing at the moment:
int i = 1;
while (
    ((*(buffer++) == 0xffffffff && ++i) || (i = 1))
    && i < desiredLength + 1
    && buffer < bufferEnd
);
It's looking for the offset of the first run of desiredLength consecutive 0xffffffff values in a 32-bit unsigned int array.
It's significantly faster than any implementation I could come up with involving an inner loop. But it's still too damn slow.
I'd go for the search_n suggestion, too, because I'm pretty sure it does this properly. It's actually pretty easy, and it can be sped up by roughly a factor of desired_length, unless the target values are really dense in the array.
Here's the idea: if you have K consecutive instances of a value starting at position I, then position I + K - 1 must contain that value. So you check that first; if it doesn't, then the earliest position which might start K consecutive values is I + K, so you can restart the algorithm there.

If, on the other hand, you find the value at I + K - 1, then you scan backwards until you either reach I (in which case you succeeded), or you reach some position J - 1 which doesn't contain the target value. In the latter case, you know there are target values from J to I + K - 1, so you now check J + K - 1. If that works, you only have to scan backwards to I + K. If it doesn't work, you restart the algorithm at J + K.

Most of the time, you'll only look at every K'th position in the vector. For large K, that's a big win.
You tagged c++ so I assume you have STL algorithms available:
std::search_n(buffer, bufferEnd, desiredLength, 0xffffffff);
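For completeness, a small sketch of turning the iterator search_n returns into an offset (find_run_offset is a hypothetical wrapper; buffer, bufferEnd, and desiredLength mirror the names in the question):

```cpp
#include <algorithm>
#include <cstdint>

// Returns the element offset of the first run of desiredLength
// 0xffffffff values, or -1 if there is none.
long find_run_offset(const std::uint32_t* buffer,
                     const std::uint32_t* bufferEnd,
                     int desiredLength)
{
    // search_n returns an iterator to the start of the run,
    // or bufferEnd if no run of that length exists.
    const std::uint32_t* hit =
        std::search_n(buffer, bufferEnd, desiredLength, 0xffffffffu);
    return (hit == bufferEnd) ? -1 : static_cast<long>(hit - buffer);
}
```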
Try memcmp from the C standard library. Modern compilers and C libraries ship highly optimized implementations of the mem* functions that get the most speed out of modern CPUs.
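memcmp compares two buffers rather than searching for a run, so one way to read this suggestion is to memcmp each candidate window against a pre-filled template of desiredLength 0xffffffff words. A sketch (function name is my own):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Returns the offset of the first run of k copies of `value` in
// buf[0..n), or n if none. Byte-wise comparison is fine here because
// we only test for exact equality of the word patterns.
std::size_t find_run_memcmp(const std::uint32_t* buf, std::size_t n,
                            std::size_t k, std::uint32_t value)
{
    if (k == 0 || k > n) return n;
    std::vector<std::uint32_t> pattern(k, value);  // template to compare against
    for (std::size_t i = 0; i + k <= n; ++i) {
        if (std::memcmp(buf + i, pattern.data(),
                        k * sizeof(std::uint32_t)) == 0)
            return i;
    }
    return n;
}
```

This is O(n * k) comparisons in the worst case, but each memcmp runs at full memory bandwidth, so it can still beat a hand-rolled word-by-word loop; it could also be combined with the skip-ahead probing described in the other answer.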