I am trying to understand the X-fast and Y-fast trie data structures, and it's not clear to me why they aren't used in large databases, since their asymptotic complexity is better than O(log N). For a database holding terabytes of data, wouldn't a Y-fast trie be better than, for example, a B-tree?
In computer science, a trie (also called a digital tree or prefix tree) is a type of k-ary search tree used for locating specific keys within a set. These keys are most often strings, with links between nodes defined not by the entire key but by individual characters.

Tries have a wide variety of applications: data compression, computational biology, the longest-prefix-matching algorithm used in IP routing tables, dictionary implementations, pattern searching, storing and querying XML documents, and so on. Radix trees, a kind of trie, are often used in IP routing.

Tries also have advantages over their counterpart, the hash table. Although a hash table has relatively faster lookups, it only supports exact matches on the whole string. A trie is more flexible and supports applications such as auto-complete, text search, and prefix matching; it also makes it easy to print all the words in a dictionary in alphabetical order, as the sketch below shows.
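To make the dictionary example concrete, here is a minimal trie sketch in Python (class and method names are my own, purely illustrative): insertion, exact lookup, and an in-order walk that yields the stored words alphabetically.

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # character -> TrieNode
        self.is_word = False  # marks the end of a stored key

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        """Exact-match lookup: only whole stored keys count."""
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def words(self, node=None, prefix=""):
        """Yield all stored words in alphabetical order."""
        if node is None:
            node = self.root
        if node.is_word:
            yield prefix
        for ch in sorted(node.children):
            yield from self.words(node.children[ch], prefix + ch)
```

For example, after `t.insert("cat")`, `t.insert("car")`, and `t.insert("dog")`, `list(t.words())` yields `['car', 'cat', 'dog']`, while `t.contains("ca")` is `False` because only whole keys match.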
There are a few reasons why X-fast and Y-fast tries might not be useful in practice:
X-fast tries internally require several linked structures, including a bitwise trie and a doubly-linked list of elements (sketched below). These don't perform well in database environments where elements are stored on disk and following a pointer can require a disk seek. (For similar reasons, databases often use B-trees over binary search trees.) Y-fast tries additionally require balanced binary search trees augmented with information to perform splits and joins, which costs extra space and introduces even more pointers to follow.
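To see how pointer-heavy this is, here is a rough structural sketch of the linked pieces inside an X-fast trie (my own simplified layout, not a full implementation). Every field below is a pointer, and a query has to chase several of them.

```python
class XFastLeaf:
    """One stored key in the doubly-linked leaf list."""
    def __init__(self, key):
        self.key = key
        self.prev = None
        self.next = None

class XFastNode:
    """Internal bitwise-trie node; a missing child is replaced by a
    'descendant' pointer to the min/max leaf under the present child."""
    def __init__(self):
        self.left = self.right = self.descendant = None

class XFastTrie:
    def __init__(self, w):
        self.w = w  # bits per key, e.g. 64
        self.root = XFastNode()
        # One hash table per bit level, mapping key prefixes to nodes;
        # predecessor queries binary-search across these w+1 tables.
        self.levels = [dict() for _ in range(w + 1)]
```

A predecessor query binary-searches over the w+1 level tables, then follows a descendant pointer and possibly a leaf-list pointer. On disk, each of those hops can be a separate seek, whereas a B-tree reads a few large, contiguous blocks.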
X-fast tries internally require the use of hash tables with worst-case O(1) lookups. Hash tables meeting that requirement usually apply several hash functions per lookup and (generally speaking) have worse locality than, say, a linear-probing hash table, so lookups are a bit slower.
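Cuckoo hashing is a standard example of a hash table with worst-case O(1) lookups. The simplified sketch below (tables are plain Python lists of (key, value) pairs) shows that even a successful lookup computes two hash functions and probes two unrelated positions in memory, in contrast to linear probing, which scans adjacent slots.

```python
def cuckoo_lookup(key, table1, table2, h1, h2):
    """Look up key in a cuckoo hash table built from two arrays and
    two independent hash functions h1, h2 (simplified sketch)."""
    slot = table1[h1(key) % len(table1)]  # first probe
    if slot is not None and slot[0] == key:
        return slot[1]
    slot = table2[h2(key) % len(table2)]  # second, unrelated probe
    if slot is not None and slot[0] == key:
        return slot[1]
    return None  # key is guaranteed absent after at most two probes
```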
Because of the hash tables in X-fast tries and the splitting and joining of BSTs in Y-fast tries, these two data structures are only amortized-efficient rather than worst-case efficient. In some settings this is unacceptable: it would be bad if, periodically, a database query took 100x or 1000x the normal time, even if everything works out well on average.
The use of hash tables in X-fast and Y-fast tries means there is an element of randomness in the data structures' runtimes. In expectation they're efficient, but with bad luck the runtimes can be quite high. Specifically, the cost of rehashing an internal hash table or splitting or joining a tree can be large. In a database implementation, predictable performance is important, so this randomness might hurt.
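A quick way to observe such a latency spike on any amortized hash table is to time each insertion individually. The sketch below uses Python's built-in dict; the exact numbers will vary by machine, but the shape of the result is the point.

```python
import time

table = {}
total = worst = 0.0
n = 1_000_000
for i in range(n):
    start = time.perf_counter()
    table[i] = i  # occasionally triggers a full internal resize
    elapsed = time.perf_counter() - start
    total += elapsed
    worst = max(worst, elapsed)

print(f"average insert: {total / n:.2e}s, worst insert: {worst:.2e}s")
# The worst insert typically lands orders of magnitude above the
# average -- exactly the kind of spike a database tries to avoid.
```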
Due to all the reasons outlined above, the constant factors buried in the runtimes of X-fast and Y-fast tries are quite large. In the long run they should end up being faster than other data structures, but "the long run" might require inputs vastly larger than any data set that could feasibly fit in a database.
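A bit of back-of-envelope arithmetic (my own numbers, not from the analysis above) shows why. For 64-bit keys, a Y-fast trie does about log2(64) = 6 "steps" per query, while a balanced comparison-based structure over even a trillion keys does about 40, so a roughly 7x gap in step counts is easily wiped out by the per-step overheads described above.

```python
import math

w = 64  # bits per key, so the key universe is U = 2**64
steps_yfast = math.log2(math.log2(2 ** w))  # log log U = log2(64) = 6
for n in (10**6, 10**9, 10**12):  # up to a trillion keys
    print(f"n={n:.0e}: comparison structure ~{math.log2(n):.0f} steps, "
          f"Y-fast trie ~{steps_yfast:.0f} steps")
```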
Hope this helps!