I have a particular problem where I want to search for many substrings in a list of many strings. The following is the gist of what I am trying to do:
listStrings = ['ACDE', 'CDDE', 'BPLL', ...]
listSubstrings = ['ACD', 'BPI', 'KLJ', ...]
The above entries are just examples. len(listStrings) is ~ 60,000, len(listSubstrings) is ~50,000-300,000, and len(listStrings[i]) is anywhere from 10 to 30,000.
My current Python attempt is:
for i in listSubstrings:
    for j in listStrings:
        if i in j:
            w.write(i+j)
Or something along these lines. While this works for my task, it's horribly slow, using only one core and taking on the order of 40 minutes to complete. Is there a way to speed this up?
I don't believe that I can make a dict out of listStrings:listSubstrings because there is the possibility of duplicate entries which need to be stored on both ends (although I may try this if I can find a way to append a unique tag to each one, since dicts are so much faster). Similarly, I don't think I can pre-compute all possible substrings. I don't even know if searching dict keys is faster than searching a list, since dict.get() only returns the value for the exact key it's given and won't look for sub-inputs. Is searching lists in memory just that slow, relatively speaking?
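As a toy illustration of the exact-match point (made-up keys, not my real data):

d = {'ACDE': 0, 'CDDE': 1}
print('ACD' in d)   # False - dict key lookup is an exact match, not a substring search
print('ACDE' in d)  # True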
For the sort of thing you're trying (searching for a fixed set of a whole bunch of strings in a whole bunch of other strings), parallelizing and minor tweaks won't help much. You need algorithmic improvements.
For a start, I'd suggest using the Aho-Corasick string matching algorithm. Basically, in exchange for some precompute work to build a matcher object from your set of fixed strings, you can scan another string for all of those fixed strings at once, in a single pass.
So instead of scanning 60K strings 50K+ times each (three BILLION scans?!?), you can scan them each once with only slightly higher cost than a normal single scan, and get all the hits.
Best part is, you're not writing it yourself. PyPI (the Python Package Index) already has the pyahocorasick package written for you. So try it out.
Example of use:
import ahocorasick

listStrings = ['ACDE', 'CDDE', 'BPLL', ...]
listSubstrings = ['ACD', 'BPI', 'KLJ', ...]

# Build the automaton once, up front, from all the needles
auto = ahocorasick.Automaton()
for substr in listSubstrings:
    auto.add_word(substr, substr)
auto.make_automaton()

...

# Each haystack is scanned a single time, yielding every needle hit in one pass
for astr in listStrings:
    for end_ind, found in auto.iter(astr):
        w.write(found+astr)
This will write found+astr multiple times if a substring ("needle") is found in the string being searched ("haystack") more than once. You could change the loop to make it only write on the first hit for a given needle in a given haystack by using a set to dedup:
for astr in listStrings:
    seen = set()  # needles already written for this haystack
    for end_ind, found in auto.iter(astr):
        if found not in seen:
            seen.add(found)
            w.write(found+astr)
You can further tweak this to output the needles for a given haystack in the same order they appeared in listSubstrings (and uniquifying while you're at it) by storing the index of the words as or with their values, so you can sort the hits (presumably small numbers, so sort overhead is trivial):
from future_builtins import map  # Only on Py2, for more efficient generator based map
from itertools import groupby
from operator import itemgetter

auto = ahocorasick.Automaton()
for i, substr in enumerate(listSubstrings):
    # Store index and substr so we can recover original ordering
    auto.add_word(substr, (i, substr))
auto.make_automaton()

...

for astr in listStrings:
    # Gets all hits, sorting by the index in listSubstrings, so we output hits
    # in the same order we theoretically searched for them
    allfound = sorted(map(itemgetter(1), auto.iter(astr)))
    # Using groupby dedups already sorted inputs cheaply; the map throws away
    # the index since we don't need it
    for found, _ in groupby(map(itemgetter(1), allfound)):
        w.write(found+astr)
For performance comparisons, I used a variant on mgc's answer that is more likely to contain matches, as well as enlarging the haystacks. First, setup code:
>>> import ahocorasick
>>> from operator import itemgetter
>>> from random import choice, randint
>>> from string import ascii_uppercase as uppercase
>>> # 5000 haystacks, each 1000-5000 characters long
>>> listStrings = [''.join([choice(uppercase) for i in range(randint(1000, 5000))]) for j in range(5000)]
>>> # ~1000 needles (might be slightly less for dups), each 3-12 characters long
>>> listSubstrings = tuple({''.join([choice(uppercase) for i in range(randint(3, 12))]) for j in range(1000)})
>>> auto = ahocorasick.Automaton()
>>> for needle in listSubstrings:
...     auto.add_word(needle, needle)
...
>>> auto.make_automaton()
And now to actually test it (using IPython's %timeit magic for microbenchmarks):
>>> sum(needle in haystack for haystack in listStrings for needle in listSubstrings)
80279 # Will differ depending on random seed
>>> sum(len(set(map(itemgetter(1), auto.iter(haystack)))) for haystack in listStrings)
80279 # Same behavior after uniquifying results
>>> %timeit -r5 sum(needle in haystack for haystack in listStrings for needle in listSubstrings)
1 loops, best of 5: 9.79 s per loop
>>> %timeit -r5 sum(len(set(map(itemgetter(1), auto.iter(haystack)))) for haystack in listStrings)
1 loops, best of 5: 460 ms per loop
So for checking for ~1000 smallish strings in each of 5000 moderate size strings, pyahocorasick beats individual membership tests by a factor of ~21x on my machine. It scales well as the size of listSubstrings increases too; when I initialized it the same way, but with 10,000 smallish strings instead of 1000, the total time required only increased from ~460 ms to ~852 ms, a 1.85x time multiplier to perform 10x as many logical searches.
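The 10,000-needle run used the same setup as above with just the count changed, roughly:

>>> listSubstrings = tuple({''.join([choice(uppercase) for i in range(randint(3, 12))]) for j in range(10000)})
>>> auto = ahocorasick.Automaton()
>>> for needle in listSubstrings:
...     auto.add_word(needle, needle)
...
>>> auto.make_automaton()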
For the record, the time to build the automatons is trivial in this sort of context. You pay it once up front, not once per haystack, and testing shows the ~1000 string automaton took ~1.4 ms to build and occupied ~277 KB of memory (above and beyond the strings themselves); the ~10,000 string automaton took ~21 ms to build and occupied ~2.45 MB of memory.
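If you want to reproduce the build-cost measurement yourself, a minimal timing sketch (assuming Python 3 for time.perf_counter; exact numbers will vary by machine and pyahocorasick version) looks something like:

>>> import time
>>> start = time.perf_counter()
>>> auto = ahocorasick.Automaton()
>>> for needle in listSubstrings:
...     auto.add_word(needle, needle)
...
>>> auto.make_automaton()
>>> print('built in %.2f ms' % ((time.perf_counter() - start) * 1000))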