What is the fastest substring search algorithm?


Build up a test library of likely needles and haystacks. Profile the tests on several search algorithms, including brute force. Pick the one that performs best with your data.
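
For instance, a minimal profiling harness along these lines can compare candidates on your own data (a sketch: the function names, common signature, and toy inputs are my own illustration, not a standard API):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Common signature assumed for every candidate under test. */
    typedef const char *(*search_fn)(const char *hay, size_t hlen,
                                     const char *ndl, size_t nlen);

    /* Brute force baseline: always worth including in the comparison. */
    static const char *brute(const char *hay, size_t hlen,
                             const char *ndl, size_t nlen)
    {
        for (size_t i = 0; i + nlen <= hlen; i++)
            if (memcmp(hay + i, ndl, nlen) == 0)
                return hay + i;
        return NULL;
    }

    /* Time one candidate over many repetitions of the same search. */
    static double profile(search_fn fn, const char *hay, const char *ndl,
                          int reps)
    {
        size_t hlen = strlen(hay), nlen = strlen(ndl);
        clock_t t0 = clock();
        for (int r = 0; r < reps; r++)
            fn(hay, hlen, ndl, nlen);
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        const char *hay = "the quick brown fox jumps over the lazy dog";
        printf("brute force: %f s\n", profile(brute, hay, "lazy", 1000000));
        return 0;
    }

Swap your real needles and haystacks in for the toy inputs, and add one profile() call per candidate algorithm.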

Boyer-Moore uses a bad character table with a good suffix table.

Boyer-Moore-Horspool uses a bad character table.

Knuth-Morris-Pratt uses a partial match table.

Rabin-Karp uses running hashes.

They all trade preprocessing overhead for fewer comparisons, to differing degrees, so real-world performance depends on the average lengths of both the needle and the haystack. The more upfront overhead, the bigger the payoff on longer inputs. With very short needles, brute force may win.
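
As a concrete example of the cheap end of that spectrum, here is a sketch of Boyer-Moore-Horspool with its single bad character table (a simplified illustration, not a tuned implementation):

    #include <stddef.h>
    #include <string.h>

    /* Boyer-Moore-Horspool sketch: preprocess the needle into a
       256-entry bad character table, then slide the window by the
       skip value of the haystack character aligned with the needle's
       last position. Returns the first match, or NULL. */
    const char *horspool(const char *hay, size_t hlen,
                         const char *ndl, size_t nlen)
    {
        size_t skip[256];

        if (nlen == 0)
            return hay;

        for (size_t i = 0; i < 256; i++)       /* default: jump a full needle */
            skip[i] = nlen;
        for (size_t i = 0; i + 1 < nlen; i++)  /* needle characters jump less */
            skip[(unsigned char)ndl[i]] = nlen - 1 - i;

        for (size_t i = 0; i + nlen <= hlen;
             i += skip[(unsigned char)hay[i + nlen - 1]])
            if (memcmp(hay + i, ndl, nlen) == 0)
                return hay + i;
        return NULL;
    }

Note that all of the extra work over brute force sits in the two preprocessing loops; for a three-character needle in a short haystack that setup may never pay for itself, which is exactly the trade-off described above.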

Edit:

A different algorithm might be best for finding base pairs, English phrases, or single words. If there were one best algorithm for all inputs, it would have been publicized.

Think about the following little table. Each question mark might have a different best search algorithm.

                 short needle     long needle
short haystack         ?               ?
long haystack          ?               ?

This should really be a graph, with a range of shorter to longer inputs on each axis. If you plotted each algorithm on such a graph, each would have a different signature. Some algorithms suffer with a lot of repetition in the pattern, which might affect uses like searching for genes. Some other factors that affect overall performance are searching for the same pattern more than once and searching for different patterns at the same time.
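
To illustrate the repeated-pattern case: if the preprocessing is split from the scan, the table is built once and amortized over every subsequent search. A hedged sketch, splitting the Horspool-style table above into a reusable precompiled pattern (the struct and names are mine):

    #include <stddef.h>
    #include <string.h>

    /* Precompiled pattern: the skip table is built once in
       pattern_init and reused by every pattern_find call. */
    typedef struct {
        const char *ndl;
        size_t nlen;
        size_t skip[256];
    } pattern;

    void pattern_init(pattern *p, const char *ndl, size_t nlen)
    {
        p->ndl = ndl;
        p->nlen = nlen;
        for (size_t i = 0; i < 256; i++)
            p->skip[i] = nlen;
        for (size_t i = 0; i + 1 < nlen; i++)
            p->skip[(unsigned char)ndl[i]] = nlen - 1 - i;
    }

    const char *pattern_find(const pattern *p, const char *hay, size_t hlen)
    {
        if (p->nlen == 0)
            return hay;
        for (size_t i = 0; i + p->nlen <= hlen;
             i += p->skip[(unsigned char)hay[i + p->nlen - 1]])
            if (memcmp(hay + i, p->ndl, p->nlen) == 0)
                return hay + i;
        return NULL;
    }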

If I needed a sample set, I think I would scrape a site like Google or Wikipedia, then strip the HTML from all the result pages. For a search site, type in a word, then use one of the suggested search phrases. Choose a few different languages, if applicable. Using web pages, all the texts would be short to medium, so merge enough pages to get longer texts. You can also find public domain books, legal records, and other large bodies of text. Or just generate random content by picking words from a dictionary. But the point of profiling is to test against the type of content you will be searching, so use real-world samples if possible.

I left short and long vague. For the needle, I think of short as under 8 characters, medium as under 64 characters, and long as under 1k. For the haystack, I think of short as under 2^10 characters, medium as under 2^20, and long as up to 2^30 characters.


I believe it may very well be the "Simple Real-Time Constant-Space String Matching" algorithm, published in 2011 by Dany Breslauer, Roberto Grossi, and Filippo Mignosi.

Update:

In 2014 the authors published this improvement: Towards optimal packed string matching.


The http://www-igm.univ-mlv.fr/~lecroq/string/index.html link you point to is an excellent source and summary of some of the best-known and most-researched string matching algorithms.

Solutions to most search problems involve trade-offs with respect to preprocessing overhead and time and space requirements. No single algorithm will be optimal or practical in all cases.

If your objective is to design a specific algorithm for string searching, then ignore the rest of what I have to say. If you want to develop a generalized string-searching service routine, then try the following:

Spend some time reviewing the specific strengths and weaknesses of the algorithms you have already referenced. Conduct the review with the objective of finding a set of algorithms that covers the range and scope of string searches you are interested in. Then build a front-end search selector, based on a classifier function, to target the best algorithm for the given inputs. This way you may employ the most efficient algorithm to do the job. This is particularly effective when an algorithm is very good for certain searches but degrades badly outside them. For example, brute force is probably the best for needles of length 1 but quickly degrades as needle length increases; beyond that, the Sustik-Moore algorithm may become more efficient (over small alphabets), and then, for longer needles and larger alphabets, the KMP or Boyer-Moore algorithms may be better. These are just examples to illustrate a possible strategy.
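
A front-end selector of that kind can be as simple as a few guarded dispatches on cheap features of the inputs. A minimal sketch (the cutoff values are illustrative placeholders to be replaced by benchmarked thresholds, and brute and horspool refer to the sketches earlier in this thread):

    #include <stddef.h>
    #include <string.h>

    /* Candidate routines, e.g. the sketches shown earlier in the thread. */
    const char *brute(const char *hay, size_t hlen,
                      const char *ndl, size_t nlen);
    const char *horspool(const char *hay, size_t hlen,
                         const char *ndl, size_t nlen);

    /* Classifier front end: route each query to the algorithm that
       benchmarking showed to be best for inputs of its shape. */
    const char *search(const char *hay, size_t hlen,
                       const char *ndl, size_t nlen)
    {
        if (nlen == 0)
            return hay;
        if (nlen == 1)                /* brute force is best for length 1 */
            return memchr(hay, ndl[0], hlen);
        if (nlen < 8)                 /* too short for table setup to pay off */
            return brute(hay, hlen, ndl, nlen);
        return horspool(hay, hlen, ndl, nlen);
    }

The classifier could equally well key on alphabet size or haystack length; the point is that each branch costs a comparison or two, while a mismatched algorithm can cost far more.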

The multiple-algorithm approach is not a new idea. I believe it has been employed by a few commercial sort/search packages (e.g. SYNCSORT, commonly used on mainframes, implements several sort algorithms and uses heuristics to choose the "best" one for the given inputs).

Each search algorithm comes in several variations that can make significant differences to its performance, as, for example, this paper illustrates.

Benchmark your service to categorize the areas where additional search strategies are needed, or to tune your selector function more effectively. This approach is not quick or easy, but done well it can produce very good results.


I was surprised to see our tech report cited in this discussion; I am one of the authors of the algorithm that was named Sustik-Moore above. (We did not use that term in our paper.)

I want to emphasize here that, for me, the most interesting feature of the algorithm is that it is quite simple to prove that each letter is examined at most once. For earlier Boyer-Moore versions it was proved that each letter is examined at most 3 times, and later at most 2 times, and those proofs were more involved (see the citations in the paper). Therefore I also see a didactic value in presenting/studying this variant.

In the paper we also describe further variations that are geared toward efficiency while relaxing the theoretical guarantees. It is a short paper and the material should be understandable to an average high school graduate in my opinion.

Our main goal was to bring this version to the attention of others who can further improve on it. String searching has so many variations that we alone cannot possibly think of all the cases where this idea could bring benefits. (Fixed text and changing pattern, fixed pattern and different texts, preprocessing possible or not, parallel execution, finding matching subsets in large texts, allowing errors, near matches, etc., etc.)


The fastest substring search algorithm is going to depend on the context:

  1. the alphabet size (e.g. DNA vs English)
  2. the needle length

The 2010 paper "The Exact String Matching Problem: a Comprehensive Experimental Evaluation" gives tables with runtimes for 51 algorithms (with different alphabet sizes and needle lengths), so you can pick the best algorithm for your context.

All of those algorithms have C implementations, as well as a test suite, here:

http://www.dmi.unict.it/~faro/smart/algorithms.php