I am trying to use the BK-tree data structure in Python to store a corpus with ~10 billion entries (1e10) in order to implement a fast fuzzy search engine.
Once I add over ~10 million (1e7) values to a single BK-tree, I start to see a significant degradation in the performance of querying.
I was thinking of storing the corpus in a forest of a thousand BK-trees and querying them in parallel.
Does this idea sound feasible? Should I create and query 1,000 BK-trees simultaneously? What else can I do in order to use BK-trees for this corpus?
I use pybktree.py, and my queries are intended to find all entries within an edit distance d.
Is there some architecture or database which will allow me to store those trees?
Note: I don’t run out of memory, rather the tree begins to be inefficient (presumably each node has too many children).
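Roughly what I have in mind for the forest (a minimal sketch; Levenshtein.distance from python-Levenshtein is just a placeholder for whatever metric I end up using):

import pybktree
import Levenshtein  # assumption: python-Levenshtein provides an integer edit distance

N_TREES = 1000

def build_forest(corpus):
    # partition the corpus by hash into N_TREES independent BK-trees
    # (a stable hash would be needed if the forest is persisted)
    buckets = [[] for _ in range(N_TREES)]
    for word in corpus:
        buckets[hash(word) % N_TREES].append(word)
    return [pybktree.BKTree(Levenshtein.distance, bucket) for bucket in buckets]

def query_forest(forest, word, d):
    # the hash partition is unrelated to edit distance, so every tree has to be
    # searched; the per-tree searches are independent and could run in parallel
    hits = []
    for tree in forest:
        hits.extend(tree.find(word, d))  # pybktree returns (distance, item) tuples
    return sorted(hits)

forest = build_forest(["hello", "help", "hero", "world", "word"])  # stand-in corpus
print(query_forest(forest, "helo", 2))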
Since you mention that you use FuzzyWuzzy as your distance metric, I will concentrate on efficient ways to implement the fuzz.ratio algorithm used by FuzzyWuzzy. FuzzyWuzzy provides two implementations of fuzz.ratio: a slow pure-Python one based on difflib and a faster one based on python-Levenshtein, which is used whenever python-Levenshtein is installed.
The python-Levenshtein implementation normalizes a Levenshtein distance in which substitutions are given a weight of 2.
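In essence (a sketch of the idea only; the real implementation is in C), the computation looks like this:

def weighted_levenshtein(s1, s2):
    # textbook dynamic program; insertions/deletions cost 1, substitutions cost 2
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(prev[j] + 1,                            # deletion
                           cur[j - 1] + 1,                         # insertion
                           prev[j - 1] + (0 if c1 == c2 else 2)))  # substitution
        prev = cur
    return prev[-1]

def ratio(s1, s2):
    lensum = len(s1) + len(s2)
    if lensum == 0:
        return 100.0
    return 100 * (lensum - weighted_levenshtein(s1, s2)) / lensum

print(ratio("this is a test", "this is a test!"))  # ~96.6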
I am the author of the library RapidFuzz, which implements the algorithms used by FuzzyWuzzy in a more performant way. RapidFuzz uses the following interface for fuzz.ratio:
def ratio(s1, s2, processor = None, score_cutoff = 0)
The additional score_cutoff parameter can be used to provide a score threshold as a float between 0 and 100. When the ratio falls below score_cutoff, 0 is returned instead. The implementation can exploit this to choose a more optimized algorithm in some cases. In the following I will describe the optimizations used by RapidFuzz depending on the input parameters. Here, max distance refers to the maximum distance that is possible without the ratio falling below the score threshold.
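For example (the package is installed as rapidfuzz):

from rapidfuzz import fuzz

# plain similarity as a float between 0 and 100
print(fuzz.ratio("this is a test", "this is a test!"))                   # ~96.6

# with a threshold: anything scoring below 90 is reported as 0, which
# allows the library to abort the distance calculation early
print(fuzz.ratio("this is a test", "something else", score_cutoff=90))   # 0.0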
When max distance is 0, the similarity can be calculated using a direct comparison, since no difference between the strings is allowed. The time complexity of this algorithm is O(N).
When max distance is 1, the similarity can likewise be calculated using direct comparisons, since a substitution would cause an edit distance higher than max distance. The time complexity of this algorithm is O(N).
A common prefix/suffix of the two compared strings does not affect the Levenshtein distance, so the affix is removed before calculating the similarity. This step is performed for any of the following algorithms.
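A sketch of that affix trimming (the trimmed strings have the same distance as the originals):

def remove_common_affix(s1, s2):
    # strip the common prefix
    prefix = 0
    while prefix < len(s1) and prefix < len(s2) and s1[prefix] == s2[prefix]:
        prefix += 1
    s1, s2 = s1[prefix:], s2[prefix:]
    # strip the common suffix of what remains
    suffix = 0
    while suffix < len(s1) and suffix < len(s2) and s1[-1 - suffix] == s2[-1 - suffix]:
        suffix += 1
    if suffix:
        s1, s2 = s1[:-suffix], s2[:-suffix]
    return s1, s2

print(remove_common_affix("kitten mittens", "kitten bittens"))  # ('m', 'b')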
When max distance is small, the mbleven algorithm is used. This algorithm checks all edit operations that are possible under the threshold max distance. A description of the original algorithm can be found here. I changed this algorithm to support the weight of 2 for substitutions. In contrast to the normal Levenshtein distance, this algorithm can be used even up to a threshold of 4 here, since the higher weight of substitutions reduces the number of possible edit operations. The time complexity of this algorithm is O(N).
For larger thresholds, as long as the strings fit into the 64-bit blocks it works on, the BitPAl algorithm is used, which calculates the Levenshtein distance in a bit-parallel way. The algorithm is described here and is extended with support for UTF32 in this implementation. The time complexity of this algorithm is O(N).
In the remaining cases the Levenshtein distance is calculated using Wagner-Fischer with Ukkonen's optimization. The time complexity of this algorithm is O(N * M). This could be replaced with a blockwise implementation of BitPal in the future.
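To illustrate the idea (a sketch only, not the actual RapidFuzz code): Ukkonen's optimization only evaluates cells within max distance of the diagonal, since any alignment leaving that band already costs more than the threshold.

INF = float("inf")

def bounded_levenshtein(s1, s2, max_dist):
    # Wagner-Fischer restricted to the band |i - j| <= max_dist, using the
    # substitution weight of 2 from fuzz.ratio; returns max_dist + 1 when
    # the distance exceeds the bound
    if abs(len(s1) - len(s2)) > max_dist:
        return max_dist + 1
    prev = [j if j <= max_dist else INF for j in range(len(s2) + 1)]
    for i, c1 in enumerate(s1, 1):
        cur = [i if i <= max_dist else INF]
        for j, c2 in enumerate(s2, 1):
            if abs(i - j) > max_dist:
                cur.append(INF)
                continue
            cur.append(min(prev[j] + 1,                            # deletion
                           cur[j - 1] + 1,                         # insertion
                           prev[j - 1] + (0 if c1 == c2 else 2)))  # substitution
        prev = cur
    return prev[-1] if prev[-1] <= max_dist else max_dist + 1

print(bounded_levenshtein("kitten", "sitting", 5))  # 5
print(bounded_levenshtein("kitten", "sitting", 3))  # 4, i.e. the bound is exceeded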
FuzzyWuzzy provides multiple processors like process.extractOne that are used to calculate the similarity between a query and multiple choices. Implementing these in C++ as well allows two more important optimizations:
1. When a scorer is used that is implemented in C++ as well, we can directly call the C++ implementation of the scorer and do not have to go back and forth between Python and C++, which provides a massive speedup.
2. We can preprocess the query depending on the scorer that is used. For example, when fuzz.ratio is used as the scorer, the query only has to be stored in the 64-bit blocks used by BitPal once, which saves around 50% of the runtime when calculating the Levenshtein distance.
So far only extractOne and extract_iter are implemented in C++, while extract, which you would use, is still implemented in Python and uses extract_iter. So it can already make use of the second optimization, but it still has to switch a lot between Python and C++, which is not optimal (a C++ implementation of extract will probably be added in v1.0.0 as well).
I performed benchmarks for extractOne and the individual scorers during development that show the performance difference between RapidFuzz and FuzzyWuzzy. Keep in mind that the performance for your case (all strings of length 20) is probably not quite as good, since many of the strings in the dataset used are very short.
The benchmarks use the words.txt dataset (99,171 words); the hardware they were run on is listed in the linked specification, and the code for this benchmark can be found here.
For this benchmark the code of process.extractOne is slightly changed to remove the score_cutoff parameter. This is done because in extractOne the score_cutoff is increased whenever a better match is found (and it exits once it finds a perfect match). In the future it would make more sense to benchmark process.extract, which does not have this behavior (the benchmark is performed using process.extractOne, since process.extract is not fully implemented in C++ yet). The benchmark code can be found here.
This shows that, when possible, the scorers should not be used directly but through the processors, which can perform a lot more optimizations.
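For example (a sketch; the exact return value of extractOne differs between RapidFuzz versions):

from rapidfuzz import fuzz, process

choices = ["this is a test", "this is a toast", "something else"]
query = "this is a tost"

# slow: one Python <-> C++ round trip per choice and no query preprocessing
best = max(choices, key=lambda choice: fuzz.ratio(query, choice))

# fast: the whole loop runs in C++ and the query is preprocessed once
match = process.extractOne(query, choices, scorer=fuzz.ratio)

print(best, match)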
As an alternative you could use a C++ implementation. The library RapidFuzz is available for C++ here. The implementation in C++ is relatively simple as well:
// function to load words into vector
std::vector<std::string> choices = load("words.txt");
std::string query = choices[0];
std::vector<double> results;
results.reserve(choices.size());
rapidfuzz::fuzz::CachedRatio<decltype(query)> scorer(query);
for (const auto& choice : choices)
{
results.push_back(scorer.ratio(choice));
}
or in parallel using OpenMP:
// function to load words into vector
std::vector<std::string> choices = load("words.txt");
std::string query = choices[0];
// pre-size the result vector so each iteration writes to its own slot
// (calling push_back from multiple threads would be a data race)
std::vector<double> results(choices.size());
rapidfuzz::fuzz::CachedRatio<decltype(query)> scorer(query);
#pragma omp parallel for
for (std::size_t i = 0; i < choices.size(); ++i)
{
    results[i] = scorer.ratio(choices[i]);
}
On my machine (see the benchmark above) this evaluates 43 million words/sec, and 123 million words/sec in the parallel version. This is around 1.5 times as fast as the Python implementation (the difference comes from conversions between Python and C++ types). However, the main advantage of the C++ version is that you are relatively free to combine algorithms whichever way you want, while in the Python version you are forced to use the process functions that are implemented in C++ to achieve good performance.
BK-trees
Kudos to Ben Hoyt and his link to the issue, which I will draw from. That being said, the first observation from the mentioned issue is that the BK-tree isn't exactly logarithmic. From what you told us your usual d is ~6, which is 3/10 of your string length. Unfortunately, that means that if we look at the tables from the issue you will get a complexity of somewhere between O(N^0.8) and O(N). In the optimistic case of the exponent being 0.8 (it will likely be slightly worse), (10^10)^0.8 = 10^8, so you get an improvement factor of ~100 on your 10B entries. So if you have a reasonably fast implementation of BK-trees it can still be worth it to use them, or to use them as a basis for further optimization.
The downside of this is that even if you use 1000 trees in parallel, you will only get the improvement from the parallelization, as the performance of the trees depends on d rather than on the number of nodes within each tree. However, even if you run all 1000 trees at once on a massive machine, we are at the ~10M nodes/tree which you reported as slow. Still, computation-wise this seems doable.
A brute force approach
If you don't mind paying a little, I would look into something like Google Cloud BigQuery, if that doesn't clash with some kind of data confidentiality. They will brute force the solution for you, for a fee. The current rate is $5/TB scanned per query. Your dataset is ~10B rows * 20 chars. Taking one byte per char, one query would scan 200 GB, so ~$1 per query if you went the lazy way.
However, since the charge is per byte of data in a column and not per complexity of the query, you could improve on this by storing your strings as bits: at 2 bits per letter this would save you 75% of the expenses.
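A sketch of such a packing, assuming a four-letter alphabet (which is what 2 bits per letter implies; the mapping below is made up):

# assumption: the corpus uses a four-letter alphabet; adjust CODE for the real one
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
LETTERS = {v: k for k, v in CODE.items()}

def pack(s):
    # pack a string into 2 bits per character, e.g. 20 characters -> 5 bytes
    value = 0
    for ch in s:
        value = (value << 2) | CODE[ch]
    return value.to_bytes((2 * len(s) + 7) // 8, "big")

def unpack(blob, length):
    value = int.from_bytes(blob, "big")
    return "".join(LETTERS[(value >> (2 * i)) & 0b11] for i in reversed(range(length)))

packed = pack("ACGTACGTACGTACGTACGT")
print(len(packed), unpack(packed, 20))  # 5 ACGTACGTACGTACGTACGT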
Improving further, you can write your query in such a way that it asks for a dozen strings at once. You might need to be a bit careful to use a batch of similar strings for the purpose of the query, to avoid clogging the results with too many one-offs.
Brute forcing of the BK-trees
Since the route above charges by volume, the ~100-fold decrease in the computation needed becomes a ~100-fold decrease in price, which might be useful, especially if you have a lot of queries to run.
However, you would need to figure out a way to store the tree across several layers of database tables and query them recursively, as the BigQuery pricing depends on the volume of data in the queried table.
Building a smart batch engine for recursive processing of the queries to minimize the costs could be a fun optimization exercise.
A choice of language
One more thing. While I think that Python is a good language for fast prototyping, analysis and thinking about code in general, you are past that stage. You are currently looking for a way to do a specific, well defined and well thought-out operation as fast as possible. Python is not a great language for this, as this example shows. While I used all the tricks I could think of in Python, the Java and C solutions were still several times faster. (Not to mention the Rust one that beat us all, but it beat us by algorithm as well, so it is hard to compare.) So if you go from Python to a faster language, you might gain another factor of ten, or maybe even more, in performance. This could be another fun optimization exercise.
Note: I am being rather conservative with the estimate, as FuzzyWuzzy already offers to use a C library in the background, so I am not too sure how much of the work still depends on Python. My experience in similar cases is that the performance gain can be a factor of 100 when going from pure Python (or worse, pure R) to a compiled language.