I want to use the Levenshtein algorithm for the following task: when a user on my website searches for some value (they type characters into an input field), I want to instantly check for suggestions via AJAX, like Google Instant does.
I have the impression that the Levenshtein algorithm is way too slow for such a task. To check its behaviour, I first implemented it in Java, printing out the two Strings in every recursive call of the method.
public class Levenshtein {

    public static void main(String[] args) {
        String a = "Hallo Zusammen";
        String b = "jfdss Zusammen";
        int res = levenshtein(a, b);
        System.out.println(res);
    }

    public static int levenshtein(String s, String t) {
        int len_s = s.length();
        int len_t = t.length();
        int cost = 0;
        System.out.println("s: " + s + ", t: " + t);
        // guard against StringIndexOutOfBoundsException on empty strings;
        // compare the LAST characters, since the recursion strips from the end
        if (len_s > 0 && len_t > 0) {
            if (s.charAt(len_s - 1) != t.charAt(len_t - 1)) cost = 1;
        }
        if (len_s == 0) {
            return len_t;
        } else if (len_t == 0) {
            return len_s;
        } else {
            String news = s.substring(0, len_s - 1);
            String newt = t.substring(0, len_t - 1);
            return min(levenshtein(news, t) + 1,
                       levenshtein(s, newt) + 1,
                       levenshtein(news, newt) + cost);
        }
    }

    public static int min(int a, int b, int c) {
        return Math.min(Math.min(a, b), c);
    }
}
However, here are some points:
The check
if (len_s > 0 && len_t > 0)
was added by me, because I was getting a StringIndexOutOfBoundsException with the above test values.
Are there optimizations that can be made to the algorithm to make it work for me, or should I use a completely different one to accomplish the desired task?
The Levenshtein distance is a string metric for measuring difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one word into the other.
Different definitions of an edit distance use different sets of string operations. Levenshtein distance operations are the removal, insertion, or substitution of a character in the string. Being the most common metric, the term Levenshtein distance is often used interchangeably with edit distance.
While the original motivation was to measure distance between human misspellings to improve applications such as spell checkers, Damerau–Levenshtein distance has also seen uses in biology to measure the variation between protein sequences.
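As a concrete illustration of the definition above (using the classic textbook pair, which does not appear in the text itself): the distance between "kitten" and "sitting" is 3, and the sketch below walks through one minimal edit sequence step by step.

```java
public class EditSteps {
    public static void main(String[] args) {
        String w = "kitten";
        w = "s" + w.substring(1);                     // substitute 'k' -> 's' : "sitten"
        w = w.substring(0, 4) + "i" + w.substring(5); // substitute 'e' -> 'i' : "sittin"
        w = w + "g";                                  // insert 'g'            : "sitting"
        System.out.println(w);                        // prints "sitting" after 3 edits
    }
}
```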
A naive recursive implementation of the Levenshtein distance has exponential complexity.
I'd suggest you use the memoization technique and implement the Levenshtein distance without recursion, reducing the complexity to O(N^2) (which needs O(N^2) memory):
public static int levenshteinDistance( String s1, String s2 ) {
    return dist( s1.toCharArray(), s2.toCharArray() );
}

public static int dist( char[] s1, char[] s2 ) {
    // distance matrix - to memoize distances between substrings
    // needed to avoid recursion
    int[][] d = new int[ s1.length + 1 ][ s2.length + 1 ];
    // d[i][j] - would contain distance between such substrings:
    // s1.substring(0, i) and s2.substring(0, j)
    for( int i = 0; i < s1.length + 1; i++ ) {
        d[ i ][ 0 ] = i;
    }
    for( int j = 0; j < s2.length + 1; j++ ) {
        d[ 0 ][ j ] = j;
    }
    for( int i = 1; i < s1.length + 1; i++ ) {
        for( int j = 1; j < s2.length + 1; j++ ) {
            int d1 = d[ i - 1 ][ j ] + 1;     // deletion
            int d2 = d[ i ][ j - 1 ] + 1;     // insertion
            int d3 = d[ i - 1 ][ j - 1 ];     // substitution (or match)
            if ( s1[ i - 1 ] != s2[ j - 1 ] ) {
                d3 += 1;
            }
            d[ i ][ j ] = Math.min( Math.min( d1, d2 ), d3 );
        }
    }
    return d[ s1.length ][ s2.length ];
}
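A quick sanity check of the matrix version (a minimal harness; the dist method is repeated here in compact form only so the snippet compiles on its own):

```java
public class LevenshteinCheck {
    public static int dist(char[] s1, char[] s2) {
        // same dynamic-programming matrix as above
        int[][] d = new int[s1.length + 1][s2.length + 1];
        for (int i = 0; i <= s1.length; i++) d[i][0] = i;
        for (int j = 0; j <= s2.length; j++) d[0][j] = j;
        for (int i = 1; i <= s1.length; i++) {
            for (int j = 1; j <= s2.length; j++) {
                int cost = (s1[i - 1] != s2[j - 1]) ? 1 : 0;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[s1.length][s2.length];
    }

    public static void main(String[] args) {
        // "kitten" -> "sitting": 2 substitutions + 1 insertion
        System.out.println(dist("kitten".toCharArray(), "sitting".toCharArray())); // 3
        // the question's test values: 5 substitutions in the first word
        System.out.println(dist("Hallo Zusammen".toCharArray(),
                                "jfdss Zusammen".toCharArray()));                  // 5
    }
}
```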
Or, even better: you may notice that for each cell in the distance matrix you only need information from the previous row, so you can reduce the memory requirement to O(N):
public static int dist( char[] s1, char[] s2 ) {
    // memoize only the previous line of the distance matrix
    int[] prev = new int[ s2.length + 1 ];
    for( int j = 0; j < s2.length + 1; j++ ) {
        prev[ j ] = j;
    }
    for( int i = 1; i < s1.length + 1; i++ ) {
        // calculate the current line of the distance matrix
        int[] curr = new int[ s2.length + 1 ];
        curr[ 0 ] = i;
        for( int j = 1; j < s2.length + 1; j++ ) {
            int d1 = prev[ j ] + 1;
            int d2 = curr[ j - 1 ] + 1;
            int d3 = prev[ j - 1 ];
            if ( s1[ i - 1 ] != s2[ j - 1 ] ) {
                d3 += 1;
            }
            curr[ j ] = Math.min( Math.min( d1, d2 ), d3 );
        }
        // the current line of the distance matrix becomes the previous one
        prev = curr;
    }
    return prev[ s2.length ];
}
Levenshtein distance is preferable only if you need to find near-exact matches.
But what if your keyword were apple and the user typed green apples? The Levenshtein distance between query and keyword would be large (7 points). And the Levenshtein distance between apple and bcdfghk (a nonsense string) would also be 7 points!
I'd suggest you use a full-text search engine instead (e.g. Lucene). The trick is that you have to use an n-gram model to represent each keyword.
In short:
1) You have to represent each keyword as a document that contains its n-grams: apple -> [ap, pp, pl, le].
2) After transforming each keyword into a set of n-grams, you have to index each keyword-document by n-gram in your search engine. You'll end up with an index like this:
...
ap -> apple, map, happy ...
pp -> apple ...
pl -> apple, place ...
...
3) Now you have an n-gram index. When you get a query, you split it into n-grams as well. After this you'll have the set of the user's query n-grams, and all you need is to fetch the most similar documents from your search engine. As a first draft, that would be enough.
4) For better suggestions, you may rank the search engine's results by Levenshtein distance.
4) For better suggest - you may rank results of search-engine by Levenshtein distance.
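The steps above can be sketched as a toy, in-memory version (illustration only; a real system would delegate the indexing and matching to Lucene, and the keyword list here is made up):

```java
import java.util.*;

public class NGramIndex {
    // step 1: split a word into character bigrams, e.g. "apple" -> [ap, pp, pl, le]
    public static List<String> bigrams(String w) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + 2 <= w.length(); i++) {
            out.add(w.substring(i, i + 2));
        }
        return out;
    }

    public static void main(String[] args) {
        String[] keywords = { "apple", "map", "happy", "place" };

        // step 2: inverted index, bigram -> keywords containing it
        Map<String, Set<String>> index = new HashMap<>();
        for (String k : keywords) {
            for (String g : bigrams(k)) {
                index.computeIfAbsent(g, x -> new TreeSet<>()).add(k);
            }
        }

        // step 3: split the (misspelled) query into bigrams and
        // count how many bigrams each keyword shares with it
        Map<String, Integer> overlap = new HashMap<>();
        for (String g : bigrams("aple")) {
            for (String k : index.getOrDefault(g, Collections.emptySet())) {
                overlap.merge(k, 1, Integer::sum);
            }
        }

        // "apple" shares the most bigrams (ap, pl, le) with the query
        String best = overlap.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
        System.out.println(best); // prints "apple"
    }
}
```

Step 4 would then re-rank the few surviving candidates by Levenshtein distance, which is cheap because the n-gram index has already narrowed the set down.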
P.S. I'd suggest you look through the book "Introduction to Information Retrieval".
You can use Apache Commons Lang3's StringUtils.getLevenshteinDistance()
:
Find the Levenshtein distance between two Strings.
This is the number of changes needed to change one String into another, where each change is a single character modification (deletion, insertion or substitution).
The previous implementation of the Levenshtein distance algorithm was from http://www.merriampark.com/ld.htm
Chas Emerick has written an implementation in Java, which avoids an OutOfMemoryError which can occur when my Java implementation is used with very large strings.
This implementation of the Levenshtein distance algorithm is from http://www.merriampark.com/ldjava.htm
StringUtils.getLevenshteinDistance(null, *)             = IllegalArgumentException
StringUtils.getLevenshteinDistance(*, null)             = IllegalArgumentException
StringUtils.getLevenshteinDistance("", "")              = 0
StringUtils.getLevenshteinDistance("", "a")             = 1
StringUtils.getLevenshteinDistance("aaapppp", "")       = 7
StringUtils.getLevenshteinDistance("frog", "fog")       = 1
StringUtils.getLevenshteinDistance("fly", "ant")        = 3
StringUtils.getLevenshteinDistance("elephant", "hippo") = 7
StringUtils.getLevenshteinDistance("hippo", "elephant") = 7
StringUtils.getLevenshteinDistance("hippo", "zzzzzzzz") = 8
StringUtils.getLevenshteinDistance("hello", "hallo")    = 1
There is an open-source library, java-util (https://github.com/jdereg/java-util), that has a StringUtilities.levenshteinDistance(string1, string2) API implemented with O(N^2) time complexity and memory only proportional to O(N) [as discussed above].
This library also includes damerauLevenshteinDistance(). Damerau–Levenshtein counts a character transposition (swap) as one edit, whereas plain Levenshtein counts it as two edits. The downside of Damerau–Levenshtein is that it does not satisfy the triangle inequality the way the original Levenshtein distance does.
A great depiction of the triangle inequality:
http://richardminerich.com/2012/09/levenshtein-distance-and-the-triangle-inequality/