 

Compare 2 strings

Tags:

c#

.net

asp.net

I have the following 2 strings:

String A: Manchester United
String B: Manchester Utd

Both strings mean the same thing, but contain different values.

How can I compare these strings to get a "matching score"? In this case, the first word is identical, "Manchester", and the second word contains similar letters, but not in the right places.

Is there any simple algorithm that returns the "matching score" after I supply 2 strings?

m0fo asked May 08 '12

2 Answers

You could calculate the Levenshtein distance between the two strings and if it is smaller than some value (that you must define) you may consider them to be pretty close.
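For example, a minimal sketch of the distance calculation (shown in Python for brevity, though the question is tagged C#; the algorithm is identical in any language):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# "Manchester United" vs "Manchester Utd" differ by 3 edits
print(levenshtein("Manchester United", "Manchester Utd"))  # → 3
```

If you then decide, say, that anything with a distance of 4 or less counts as a match, these two strings would be considered pretty close.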

Darin Dimitrov answered Oct 02 '22

I've needed to do something like this and used Levenshtein distance.

I used it for a SQL Server UDF which is used in queries over more than a million rows (with texts of up to 6 or 7 words).

I found that the algorithm runs faster and the "similarity index" is more precise if you compare each word separately. I.e. you split each input string into words and compare each word of one input string to each word of the other.

Remember that Levenshtein gives the difference, and you have to convert it to a "similarity index". I used something like one minus the distance divided by the length of the longest word (but with some variations). For example, comparing "United" to "Utd", the distance is 3 and the longest word has 6 letters, so the similarity is 1 - 3/6 = 0.5.

First rule: order and number of words

You must also consider:

  • whether both inputs must have the same number of words, or whether it can vary
  • whether the order of the words must be the same in both inputs, or whether it can change

Depending on this, the algorithm changes. For example, applying the first rule is really fast if the number of words differs. And the second rule reduces the number of comparisons, especially if there are many words in the compared texts. That's explained with examples later.

Second rule: weighting the similarity of each compared pair

I also weighted the longer words higher than the shorter ones when computing the global similarity index. My algorithm takes the longest of the two words in each compared pair, and gives a higher weight to pairs with longer words than to pairs with shorter ones, although not exactly proportionally to the pair length.
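A sketch of this weighting (the helper names are my own, and the weight here is simply the longer word's length, which simplifies the "not exactly proportional" weighting described above):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def word_similarity(w1, w2):
    # 1 - distance / longest length, as described above
    longest = max(len(w1), len(w2))
    return 1.0 - levenshtein(w1, w2) / longest if longest else 1.0

def weighted_similarity(pairs):
    # each pair is weighted by the length of its longer word
    weights = [max(len(a), len(b)) for a, b in pairs]
    scores = [word_similarity(a, b) for a, b in pairs]
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

pairs = [("Manchester", "Manchester"), ("United", "Utd")]
print(weighted_similarity(pairs))  # (1.0*10 + 0.5*6) / 16 = 0.8125
```

So the perfect "Manchester" match, being the longer pair, dominates the global score.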

Sample comparison: same order

With this example, which uses a different number of words in each input:

  • compare "Manchester United" to "Manchester Utd FC"

If the same order of the words in both inputs is guaranteed, you should compare these pairs:

Manchester United
Manchester Utd    FC

(Manchester,Manchester) (Utd,United) (FC: not compared)

Manchester     United
Manchester Utd FC

(Manchester,Manchester) (Utd: not compared) (United,FC)

           Manchester United
Manchester Utd       FC

(Manchester: not compared) (Manchester,Utd) (United,FC)

Obviously, the highest score would be for the first set of pairs.

Implementation

To compare words in the same order:

The string with the higher number of words is a fixed vector, shown as A,B,C,D,E in this example, where v[0] is word A, v[1] is word B, and so on.

For the string with the lower number of words, we need to create all the possible combinations of indexes that can be compared with the first set. In this case, the string with the lower number of words is represented by a,b,c.

You can use a simple loop to create all the vectors that represent the pairs to be compared, like so:

A,B,C,D,E   A,B,C,D,E   A,B,C,D,E   A,B,C,D,E   A,B,C,D,E   A,B,C,D,E
a,b,c       a,b,  c     a,b,    c   a,  b,c     a,  b,  c   a,    b,c
0 1 2       0 1   3     0 1     4   0   2 3     0   2   4   0     3 4

A,B,C,D,E   A,B,C,D,E   A,B,C,D,E   A,B,C,D,E
  a,b,c       a,b,  c     a,  b,c       a,b,c
  1 2 3       1 2   4     1   3 4       2 3 4

The numbers in the sample are vectors holding the indices of the long set of words that each word of the short set must be compared with. I.e. v[0]=0 means compare index 0 of the short set (a) to index 0 of the long set (A), v[1]=2 means compare index 1 of the short set (b) to index 2 of the long set (C), and so on.

To calculate these vectors, simply start with 0,1,2 and repeatedly move to the right the last index that can still be moved:

Start by moving the last one:

0,1,2 -> 0,1,3 -> 0,1,4 
No more moves are possible; move the previous index and reset the others
to the lowest possible values (move 1 to 2, reset 4 to 3)

When the last one can't be moved any further, move the one before the last, and reset the last to the nearest possible place (1 moved to 2, and 4 moved to 3):

0,2,3 -> 0,2,4
No more moves are possible for the last one; move the one before the last

Move the one before the last again.

0,3,4
No more moves are possible for the last one; try the one before the last.
Not possible either, so move the one before that one, and reset the others:

Move the previous one:

1,2,3 -> 1,2,4

And so on. See the picture

When you have all the possible combinations, you can compare the defined pairs.
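The move-to-the-right procedure above can be sketched like this (it produces exactly the vectors listed, in the same order; in practice Python's itertools.combinations does the same job, which the test below confirms):

```python
from itertools import combinations

def index_vectors(k, n):
    """All increasing vectors of k indices into range(n), by the move-right rule."""
    v = list(range(k))           # start with 0,1,2,...
    while True:
        yield tuple(v)
        i = k - 1                # find the rightmost index that can still move
        while i >= 0 and v[i] == n - k + i:
            i -= 1
        if i < 0:
            return               # nothing can move: done
        v[i] += 1                # move it to the right...
        for j in range(i + 1, k):
            v[j] = v[j - 1] + 1  # ...and reset the following ones

print(list(index_vectors(3, 5)))  # the ten vectors shown above, in the same order
```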

Third rule: minimum similarity to stop comparison

Stop the comparison when a minimum similarity is reached: depending on what you want to do, you may be able to set a threshold that, once reached, stops the comparison of the pairs.

If you can't set a threshold, you can at least always stop if you get 100% similarity for each pair of words. This can save a lot of time.

On some occasions you can simply decide to stop the comparison when the similarity reaches, say, 75%. This can be used if you want to show the user all the strings that are similar to the one they provided.
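A self-contained sketch of the 100%-stop case, reusing the same distance and weighting ideas as above (the helper names are my own):

```python
from itertools import combinations

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pair_score(pairs):
    # weighted similarity: each pair weighted by its longer word's length
    weights = [max(len(a), len(b)) for a, b in pairs]
    scores = [1.0 - levenshtein(a, b) / w for (a, b), w in zip(pairs, weights)]
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def best_alignment_score(short_words, long_words, stop_at=1.0):
    best = 0.0
    for idx in combinations(range(len(long_words)), len(short_words)):
        pairs = [(a, long_words[j]) for a, j in zip(short_words, idx)]
        best = max(best, pair_score(pairs))
        if best >= stop_at:      # e.g. every pair matched 100%: stop early
            break
    return best

print(best_alignment_score(["Manchester", "Utd"], ["Manchester", "Utd", "FC"]))  # 1.0
```

Here the very first alignment already gives a perfect score, so the remaining alignments are never evaluated.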

Sample: comparison with change of the order of the words

If the order of the words can change, you need to compare each word of the first set with each word of the second set, and take the highest scores among the combinations of results that pair all the words of the shorter input, in every possible order, with different words of the longer input. For this you can populate the upper or lower triangle of an (n x m) matrix of similarities, and then take the required elements from the matrix.
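A sketch of the matrix approach, with a greedy pick of the highest-scoring unused partner for each word (a simplification: the text above suggests trying all combinations, but greedy matching is often good enough and much cheaper):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def word_similarity(w1, w2):
    longest = max(len(w1), len(w2))
    return 1.0 - levenshtein(w1, w2) / longest if longest else 1.0

def order_free_similarity(s1, s2):
    w1, w2 = s1.split(), s2.split()
    if len(w1) > len(w2):
        w1, w2 = w2, w1
    # n x m similarity matrix: one row per word of the shorter string
    matrix = [[word_similarity(a, b) for b in w2] for a in w1]
    used, score, weight = set(), 0.0, 0.0
    for i, a in enumerate(w1):
        j = max((j for j in range(len(w2)) if j not in used),
                key=lambda j: matrix[i][j])      # best unused partner
        used.add(j)
        w = max(len(a), len(w2[j]))              # weight by the longer word
        score += matrix[i][j] * w
        weight += w
    return score / weight

print(order_free_similarity("Manchester United", "Utd Manchester"))  # 0.8125
```

Note that the swapped word order no longer hurts the score: the result is the same as for "Manchester Utd".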

Fourth rule: normalization

You must also normalize the words before comparison, like so:

  • if the comparison is not case-sensitive, convert all the words to upper or lower case
  • if it is not accent-sensitive, remove the accents from all the words
  • if you know that there are usual abbreviations, you can also normalize to the abbreviation, which is faster (i.e. convert united to utd, not utd to united)
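All three normalizations fit in a few lines (the abbreviation table here is a made-up example; fill it with whatever is usual in your data):

```python
import unicodedata

# hypothetical abbreviation table: normalize TO the abbreviation (the shorter form)
ABBREVIATIONS = {"united": "utd"}

def normalize(word):
    word = word.lower()                               # not case-sensitive
    word = "".join(c for c in unicodedata.normalize("NFD", word)
                   if not unicodedata.combining(c))   # not accent-sensitive
    return ABBREVIATIONS.get(word, word)              # apply known abbreviations

print(normalize("Únited"))  # → utd
```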

Caching for optimization

To optimize the procedure, I cached whatever I could, e.g. the comparison vectors for the different sizes, like the vectors 0,1,2 - 0,1,3 - 0,1,4 - 0,2,3 in the A,B,C,D,E to a,b,c comparison example: all combinations for lengths 3,5 are calculated on first use and recycled for every later 3-word to 5-word comparison.
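In Python this caching is one decorator (the function name is mine); in C# you would keep a Dictionary keyed by the pair of lengths:

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def pair_vectors(short_len, long_len):
    # computed once per (short_len, long_len) pair, then reused for every
    # later comparison between texts with those word counts
    return tuple(combinations(range(long_len), short_len))

v1 = pair_vectors(3, 5)
v2 = pair_vectors(3, 5)
print(v1 is v2)  # → True: the second call is a cache hit
```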

Other algorithms

I tried Hamming distance and the results were less accurate.

You can do much more complex things, like semantic comparisons or phonetic comparisons, or consider that some letters are just the same (like b and v in several languages, such as Spanish, where there is no distinction in pronunciation). Some of these things are very easy to implement, and others are really difficult.

NOTE: I didn't include an implementation of Levenshtein distance, because you can easily find it implemented in different languages.

JotaBe answered Oct 02 '22