Simple spell checking algorithm

I've been tasked with creating a simple spell checker for an assignment but have been given next to no guidance, so I was wondering if anyone could help me out. I'm not after someone to do the assignment for me, but any direction or help with the algorithm would be awesome! If what I'm asking is not within the guidelines of the site then I'm sorry and I'll look elsewhere. :)

The project loads correctly spelled lower case words and then needs to make spelling suggestions based on two criteria:

  • One letter difference (a letter either added or removed to make the word match a word in the dictionary). For example, 'stack' would be a suggestion for 'staick', and 'cool' would be a suggestion for 'coo'.

  • One letter substitution. So for example, 'bad' would be a suggestion for 'bod'.

So, just to make sure I've explained properly: I might load in the words [hello, goodbye, fantastic, good, god], and then the suggestions for the (incorrectly spelled) word 'godd' would be [good, god].

Speed is my main consideration here, so while I think I know a way to get this to work, I'm really not too sure how efficient it'll be. The way I'm thinking of doing it is to create a map<string, vector<string>> and then, for each correctly spelled word that's loaded in, add the correctly spelled word as a key in the map and populate the vector with all the possible 'wrong' permutations of that word.

Then, when I want to look up a word, I'll look through every vector in the map to see if that word is a permutation of one of the correctly spelled words. If it is, I'll add the key as a spelling suggestion.

This seems like it would take up HEAPS of memory, though, because there would surely be thousands of permutations for each word? It also seems like it'd be very, very slow if my initial dictionary of correctly spelled words was large?

I was thinking that maybe I could cut down the time a bit by only looking at the keys that are similar to the word I'm checking. But then again, if they're similar in some way then it probably means the key will be a suggestion, meaning I don't need all those permutations!

So yeah, I'm a bit stumped about which direction to take. I'd really appreciate any help, as I really am not sure how to estimate the speed of the different approaches (we haven't been taught this at all in class).

asked Oct 18 '11 by Sam




2 Answers

The simplest way to solve the problem is indeed a precomputed map [bad word] -> [suggestions].

The problem is that while the removal of a letter creates only a few "bad words", addition and substitution create many candidates.

So I would suggest another solution ;)

Note: the edit distance you are describing is called the Levenshtein Distance.

The solution is described in incremental steps; the search speed should improve with each idea, and I have tried to organize them with the simplest ideas (in terms of implementation) first. Feel free to stop whenever you're comfortable with the results.


0. Preliminary

  • Implement the Levenshtein Distance algorithm
  • Store the dictionary in a sorted sequence (std::set, for example, though a sorted std::deque or std::vector would be better performance-wise)

Key points:

  • The Levenshtein Distance computation uses an array; at each step the next row is computed solely from the previous row
  • The minimum distance in a row is always greater than (or equal to) the minimum in the previous row

The latter property allows a short-circuit implementation: if you want to limit yourself to 2 errors (the threshold), then whenever the minimum of the current row is greater than 2 you can abandon the computation. A simple strategy is to return threshold + 1 as the distance.
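
For illustration, here is a minimal C++ sketch of the short-circuited computation; the function name and the threshold parameter are my own choices, and only one row of the matrix is kept in memory:

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Levenshtein distance between `a` and `b`, computed one row at a time.
// If every cell of a row exceeds `threshold`, the computation is
// abandoned and threshold + 1 is returned, as suggested above.
std::size_t levenshtein(const std::string& a, const std::string& b,
                        std::size_t threshold)
{
    std::vector<std::size_t> row(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) row[j] = j;

    for (std::size_t i = 1; i <= a.size(); ++i) {
        std::size_t diag = row[0];          // matrix[i-1][0]
        row[0] = i;
        std::size_t rowMin = row[0];
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t above = row[j];     // matrix[i-1][j]
            std::size_t cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            row[j] = std::min({ above + 1,         // deletion
                                row[j - 1] + 1,    // insertion
                                diag + cost });    // substitution
            diag = above;
            rowMin = std::min(rowMin, row[j]);
        }
        if (rowMin > threshold) return threshold + 1;  // short-circuit
    }
    return row[b.size()];
}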


1. First Attempt

Let's begin simple.

We'll implement a linear scan: for each word we compute the (short-circuited) distance, and we list the words that achieve the smallest distance so far.

It works very well on smallish dictionaries.
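
A sketch of that scan, reusing the levenshtein function from step 0 (the container choice and the names are my own):

#include <cstddef>
#include <string>
#include <vector>

// Linear scan: keep every word that ties the best distance found so far.
std::vector<std::string> suggest(const std::string& word,
                                 const std::vector<std::string>& dictionary,
                                 std::size_t threshold)
{
    std::vector<std::string> best;
    std::size_t bestDistance = threshold + 1;
    for (const std::string& candidate : dictionary) {
        std::size_t d = levenshtein(word, candidate, threshold);
        if (d < bestDistance) { bestDistance = d; best.clear(); }
        if (d == bestDistance && d <= threshold) best.push_back(candidate);
    }
    return best;
}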


2. Improving the data structure

The Levenshtein distance between two words is at least the difference of their lengths.

By using the pair (length, word) as a key instead of just the word, you can restrict your search to the length range [length - edit, length + edit] and greatly reduce the search space.
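
One way to realize this, assuming the dictionary is a std::set of (length, word) pairs; pairs compare lexicographically, so lower_bound finds the boundaries of the length range directly (the helper name is my own):

#include <cstddef>
#include <set>
#include <string>
#include <utility>

using Dictionary = std::set<std::pair<std::size_t, std::string>>;

// Visit only the words whose length lies in [len - edit, len + edit].
template <typename Visitor>
void forEachCandidate(const Dictionary& dict, std::size_t len,
                      std::size_t edit, Visitor visit)
{
    std::size_t lo = (len > edit) ? len - edit : 0;
    auto first = dict.lower_bound({ lo, std::string() });
    auto last  = dict.lower_bound({ len + edit + 1, std::string() });
    for (auto it = first; it != last; ++it)
        visit(it->second);   // candidate word with a plausible length
}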


3. Prefixes and pruning

To improve on this, we can remark that when we build the distance matrix row by row, one word is entirely scanned (the word we look for) but the other (the referent) is not: we only use one letter of it for each row.

This very important property means that for two referents that share the same initial sequence (prefix), the first rows of the matrix will be identical.

Remember that I asked you to store the dictionary sorted? It means that words that share the same prefix are adjacent.

Suppose that you are checking your word against cartoon and at car you realize it does not work (the distance is already too large); then any word beginning with car won't work either, and you can skip words as long as they begin with car.

The skip itself can be done either linearly or with a binary search (find the first word with a prefix greater than car):

  • linear works best if the prefix is long (few words to skip)
  • binary search works best for a short prefix (many words to skip)

How long is "long" depends on your dictionary and you'll have to measure. I would go with the binary search to begin with.

Note: the length partitioning works against the prefix partitioning, but it prunes much more of the search space.
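
A sketch of the binary-search skip, assuming the dictionary is a sorted std::vector<std::string> of non-empty lowercase words (the helper name is my own):

#include <algorithm>
#include <string>
#include <vector>

// After `prefix` has been ruled out, return the iterator past the
// last dictionary word starting with that prefix.
std::vector<std::string>::const_iterator
skipPrefix(const std::vector<std::string>& dict,
           std::vector<std::string>::const_iterator current,
           const std::string& prefix)
{
    // Incrementing the last character yields the smallest string that is
    // greater than every word beginning with `prefix` ("car" -> "cas").
    std::string bound = prefix;
    ++bound.back();
    return std::lower_bound(current, dict.end(), bound);
}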


4. Prefixes and re-use

Now, we'll also try to re-use the computations as much as possible (and not just the "it does not work" result).

Suppose that you have two words:

  • cartoon
  • carwash

You first compute the matrix, row by row, for cartoon. Then, when reading carwash, you determine the length of the common prefix (here car), and you can keep the first 4 rows of the matrix (corresponding to void, c, a, r).

Therefore, when you begin computing carwash, you in fact start iterating at w.

To do this, simply use an array allocated once at the beginning of your search, made large enough to accommodate the longest reference (you should know the largest word length in your dictionary).
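
A sketch of this re-use, where rows correspond to letters of the referent and only the rows past the common prefix are recomputed; the structure and names are my own, and the matrix is sized once for the longest dictionary word:

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Distance matrix reused across referents visited in sorted order:
// the rows shared with the previous referent's prefix are kept.
struct IncrementalMatrix {
    std::vector<std::vector<std::size_t>> rows;  // one row per referent letter
    std::string previous;                        // previous referent

    IncrementalMatrix(const std::string& word, std::size_t maxReferentLen)
        : rows(maxReferentLen + 1, std::vector<std::size_t>(word.size() + 1))
    {
        for (std::size_t j = 0; j <= word.size(); ++j) rows[0][j] = j;
    }

    // Distance between `word` (fixed at construction) and `referent`.
    std::size_t distance(const std::string& word, const std::string& referent)
    {
        // Rows 0..shared are identical to the previous computation.
        std::size_t shared = 0;
        while (shared < referent.size() && shared < previous.size()
               && referent[shared] == previous[shared]) ++shared;
        previous = referent;

        for (std::size_t i = shared + 1; i <= referent.size(); ++i) {
            rows[i][0] = i;
            for (std::size_t j = 1; j <= word.size(); ++j) {
                std::size_t cost = (referent[i - 1] == word[j - 1]) ? 0 : 1;
                rows[i][j] = std::min({ rows[i - 1][j] + 1,      // deletion
                                        rows[i][j - 1] + 1,      // insertion
                                        rows[i - 1][j - 1] + cost });
            }
        }
        return rows[referent.size()][word.size()];
    }
};

With the dictionary sorted, calling distance on each word in order automatically maximizes the shared prefixes.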


5. Using a "better" data structure

To have an easier time working with prefixes, you could use a Trie or a Patricia Tree to store the dictionary. However, neither is an STL data structure, and you would need to augment it to store in each subtree the range of word lengths it contains, so you'll have to write your own implementation. It's not as easy as it seems, because there are memory-explosion issues that can kill locality.

This is a last resort option. It's costly to implement.

answered Sep 23 '22 by Matthieu M.


You should have a look at Peter Norvig's explanation of how to write a spelling corrector:

How to write a spelling corrector

Everything is well explained in the article; as an example, the Python code for the spell checker looks like this:

import re, collections

def words(text): return re.findall('[a-z]+', text.lower())

def train(features):
    # Count word frequencies; unseen words default to 1 (smoothing).
    model = collections.defaultdict(lambda: 1)
    for f in features:
        model[f] += 1
    return model

# Python 3: open() replaces Python 2's file()
NWORDS = train(words(open('big.txt').read()))

alphabet = 'abcdefghijklmnopqrstuvwxyz'

def edits1(word):
    # All strings one edit away: deletions, transpositions,
    # replacements and insertions.
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces   = [a + c + b[1:] for a, b in splits for c in alphabet if b]
    inserts    = [a + c + b     for a, b in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def known_edits2(word):
    # Strings two edits away that are actual dictionary words.
    return set(e2 for e1 in edits1(word) for e2 in edits1(e1) if e2 in NWORDS)

def known(words): return set(w for w in words if w in NWORDS)

def correct(word):
    # Prefer the word itself, then distance 1, then distance 2;
    # among the candidates, pick the most frequent.
    candidates = known([word]) or known(edits1(word)) or known_edits2(word) or [word]
    return max(candidates, key=NWORDS.get)

Hope you can find what you need on Peter Norvig's website.

answered Sep 22 '22 by Ricky Bobby