
How is a bitmapped vector trie faster than a plain vector?

It's supposedly faster than a vector, but I don't really understand how locality of reference is supposed to help this (since a vector is by definition the most locally packed data possible -- every element is packed next to the succeeding element, with no extra space between).

Is the benchmark assuming a specific usage pattern or something similar?

How is this possible?

asked Jan 13 '12 by user541686

1 Answer

Bitmapped vector tries aren't strictly faster than normal vectors, at least not at everything. It depends on which operation you are considering.

Conventional vectors are faster, for example, at accessing a data element at a specific index. It's hard to beat a straight indexed array lookup. And from a cache locality perspective, big arrays are pretty good if all you are doing is looping over them sequentially.

However, a bitmapped vector trie will be much faster for other operations, thanks to structural sharing: for example, creating a new copy with a single changed element, without affecting the original data structure, is O(log32 n) rather than the O(n) a traditional vector needs. That's a huge win.
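To make the structural-sharing point concrete, here is a minimal sketch of the path-copying idea in Python. The `assoc` helper and the fixed two-level layout are illustrative assumptions for the example, not Clojure's actual implementation; real implementations handle variable depth, a tail buffer, and more.

```python
# Minimal sketch of path copying in a persistent bitmapped vector trie.
# Only the nodes on the path from root to the modified leaf are copied;
# every other subtree is shared between the old and new versions.

BITS = 5           # 5 bits per level -> branching factor 32
WIDTH = 1 << BITS  # 32
MASK = WIDTH - 1

def assoc(node, depth, index, value):
    """Return a new trie with position `index` set to `value`.

    Copies one node per level (O(log32 n) work); all untouched
    subtrees are shared with the original trie.
    """
    new_node = list(node)                  # shallow copy: one node only
    if depth == 0:
        new_node[index & MASK] = value     # leaf level: write the value
    else:
        slot = (index >> (depth * BITS)) & MASK
        new_node[slot] = assoc(node[slot], depth - 1, index, value)
    return new_node

# Build a two-level trie holding the values 0..1023.
root = [list(range(i * WIDTH, (i + 1) * WIDTH)) for i in range(WIDTH)]

# "Update" index 1000 in a new version; the original is untouched.
v2 = assoc(root, 1, 1000, -1)

print(root[1000 >> BITS][1000 & MASK])        # 1000 (original unchanged)
print(v2[1000 >> BITS][1000 & MASK])          # -1   (new version sees the change)
print(sum(a is b for a, b in zip(root, v2)))  # 31 of 32 subtrees are shared
```

The key observation is the `list(node)` shallow copy: a 32-element node is duplicated at each of the log32 n levels, while the other 31 children of each copied node are reused by reference, which is why both versions can coexist cheaply.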

Here's an excellent video well worth watching on the topic, which includes a lot of the motivation of why you might want these kind of structures in your language: Persistent Data Structures and Managed References (talk by Rich Hickey).

answered Sep 25 '22 by mikera