
Algorithm using soundex() or metaphone() to create Mad Gab style phrases

I'm attempting to create an algorithm that will suggest Mad Gab style phrases.

The input is a set of phrases. I also have a set of keywords that I'd like to use when possible. Currently, my solution is simply brute force:

  • loop over phrases (character by character)
    • if keyword is found
      • store keyword and branch (recursion)
    • increment character count

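A minimal sketch of that brute-force branching, assuming the phrase and keyword set are already normalized to lowercase letters (names and inputs here are hypothetical):

```python
def find_keywords(phrase, keywords, start=0, found=None):
    """Recursively scan `phrase`, branching whenever a keyword matches."""
    if found is None:
        found = []
    results = []
    for end in range(start + 1, len(phrase) + 1):
        segment = phrase[start:end]
        if segment in keywords:
            if end == len(phrase):
                # Reached the end of the phrase with a full decomposition.
                results.append(found + [segment])
            else:
                # Store the keyword and branch on the remainder.
                results.extend(
                    find_keywords(phrase, keywords, end, found + [segment]))
    return results
```

For example, `find_keywords("catcall", {"cat", "call", "catcall"})` branches into both `["cat", "call"]` and `["catcall"]`, which is exactly the compound-keyword ambiguity in play.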
However, the problems I am running into are:

  • Account for compound keywords, e.g. "catches" can be "catches", "cat" + "cheeses"
  • Allow literal terms - "the", "and", "one", "two", "three".
  • How to suggest terms that are not keywords, i.e. fall back on something like the system dictionary when keywords or literals cannot be found.
  • Skip phrase segments. Right now it just does one pass through. But consider the case where the phrase starts with something unmatched but a few characters later contains matches.

I am most familiar with PHP and MySQL. However, I am open to another technology if it provides a better solution.

I am also interested in any additional suggestions. Particularly ways to use the second parameter of metaphone(), which caps the number of phonemes in the returned key, to make harder suggestions.

asked Mar 19 '12 at 20:03 by Jason McCreary


2 Answers

Perhaps start with a syllable division algorithm on the phrase bank. Even a simple resource that teaches children how to divide syllables can serve as your rough divider method:

http://www.ewsdonline.org/education/components/scrapbook/default.php?sectiondetailid=7584

If you want a more technical, completely accurate way, there was a Ph.D. dissertation about how to do it:

http://www.tug.org/docs/liang/

Then turn each syllable into a phonetic representation using either something you roll yourself or metaphone(). You can use a similar site that explains vowel sound rules. These will only be generalizations. You will process vowels separately from consonants if you roll your own. Metaphone just uses consonants, which is fine, but not as cool as if you also took into account vowels.

Vowels: http://www.eslgold.com/pronunciation/english_vowel_sounds.html
Consonants: http://usefulenglish.ru/phonetics/english-consonant-sounds

Then, you have a dictionary of English words for your word bank. There are many open-source dictionaries available that you could stick into a MySQL table.

Start with the first syllable and look for a random word in the dictionary that passes the soundex test. If you can't find one (a single syllable will generally only match one-syllable words), add the next syllable and search again.
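As a sketch of that accretion loop, here is a simplified hand-rolled soundex (PHP's soundex() and MySQL's SOUNDEX() follow the same coding scheme) plus the add-a-syllable-and-retry search; the dictionary here is just a toy list, not a real word bank:

```python
def soundex(word):
    """Simplified 4-character soundex, same scheme as PHP/MySQL soundex."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":  # h and w do not separate duplicate codes
            prev = code
    return (out + "000")[:4]

def syllable_match(syllables, dictionary):
    """Accrete syllables until the joined string soundex-matches a word."""
    by_soundex = {}
    for w in dictionary:
        by_soundex.setdefault(soundex(w), []).append(w)
    joined = ""
    for syl in syllables:
        joined += syl
        hits = by_soundex.get(soundex(joined))
        if hits:
            return hits  # candidate words for this syllable run
    return []
```

Note that soundex("lah") and soundex("law") both come out as "L000", which is why the MySQL strcmp example further down returns 0 for that pair.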

Example:

"Logical consequence"

A. Syllable split

"lo gi cal con se quence"

B. Vowel Sounds applied

"lah gee cahl con see quince"

C. Consonant Sounds applied

"lah jee kahl kon see quinse"

D. Soundex test (one-syllable soundex; obviously too easy to guess, but it proves the concept)

"Law Gee Call Con Sea Quints"

A strcmp of two soundex values returns a number (0 when they match). So if you like, you could compute the soundex values of everything in your word bank in advance. Then you can quickly run the strcmp.

An example of a Soundex MySQL comparison is:

select strcmp(soundex('lah'), soundex('law'));

I think using the MySQL soundex is easier for you than the PHP soundex test if you want a random result from a big database and you've already captured the soundex value in a field in your dictionary table.

My suggestion may be inefficient, but optimization is a different question.

Update:

I didn't mean to imply that my solution would only yield one-syllable words. I used one syllable as the example, but if you took two of the syllables together, you'd get multi-syllable matches. In fact, you could probably just start by jamming all the syllables together and running soundex in MySQL. If you find an answer, great. But then you can roll off syllables until you get the longest match you can. Then you're left with the end of the phrase and can take those together and run a match.

I think that's the essence of the solution below from the other contributor, but I think you need to avoid jamming all the letters together without spaces. In English, you'd lose information that way. Think of a phrase beginning with a "th" sound. If you jam the phrase together, you lose which "th" sound is needed. "Theremin" (the instrument) has a different "th" sound than "There, a man".
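The jam-then-roll-off idea can be sketched greedily; here `sdx` is whatever phonetic coder you settle on (soundex, metaphone, or home-rolled), and `by_soundex` is an assumed map from codes to candidate words:

```python
def longest_match_split(syllables, by_soundex, sdx):
    """Greedy: jam syllables together, then roll syllables off the end
    until the longest phonetically matching run is found."""
    results, i = [], 0
    while i < len(syllables):
        for j in range(len(syllables), i, -1):
            hits = by_soundex.get(sdx("".join(syllables[i:j])))
            if hits:
                results.append(hits[0])  # take any candidate word
                i = j
                break
        else:
            i += 1  # no match even for one syllable; skip it
    return results
```

With an identity coder and a map containing only "logical", `["lo", "gi", "cal"]` jams to the full-word match, and a trailing unmatched syllable is simply skipped.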

answered Oct 07 '22 at 03:10 by Jonathan Barlow


Taking a different tack from Jonathan Barlow's solution, I recommend an O(n²) algorithm that gives you the properties you seek, in randomness, robustness, and scalable difficulty. The complexity of this algorithm can be further improved in constant time or with optimizations to the modality of the search, but because the size of your input phrases is guaranteed to be small, it's not that big a deal.

  1. Construct a hash table of all known words in the Oxford English Dictionary and a map of lists of words by soundex() value. This initially sounds intractable, until you realize that there aren't actually that many of them in current use. Assuming a decent one-way hashing algorithm, this should take several megabytes, tops.

  2. Consider the words in your input phrase as a single, compressed string of characters with no word identity whatsoever, discarding whitespace and all punctuation. From this, walk the space for all character lengths, starting with a length of one, up to the full length of the amalgamated phrase minus one. For each string produced by this walk, perform a hash lookup against OED. When a word is encountered that's present in the dictionary, append its word and position to the end of a list in memory.

    (This pass always takes 1 + 2 + ... + n time, which is by definition n(n+1)/2. So, O(n²) it is. Its space complexity is worst-case O(n²), but in practice, a fully connected set of terms is extremely unlikely.)

  3. Now comes your difficulty slider. From the produced list, chop off the first N% of the found terms, where N is your level of difficulty. The principle here is, smaller words are easier for someone to lexically process, while longer words are more difficult to sound out and differentiate.

  4. Construct an array conforming to the original length of the phrase (without spaces and punctuation) and shuffle your list of encountered words. Now, walk the shuffled list. For each element, verify if all of the slots in the array are free for that word at its original position. If they are, keep the word and its position, marking the slots as used in the array. If they are not, iterate to the next word until the list is exhausted.*

  5. From the final output array, construct a partitioned list of unused characters in the space, treating each bag of characters as its own phrase. For this list, perform syllable detection exactly as sketched out here, passing the results to metaphone() with a percentage chance of glomming two or more syllables together. Then, for the bag of output dictionary words from 4., perform soundex(), pulling a random word from the word's mapped list of comparable soundex values. For every word that can only soundex() to itself according to the backing map of lists, perform partitioning and metaphone(). Finally, stitch the two lists of results together by sorting on position and print your result.
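Steps 2 and 4 above can be sketched together; a plain Python set stands in for the hashed OED, and the shuffle order is the only source of randomness here:

```python
import random

def find_dictionary_words(phrase, dictionary):
    """Step 2: walk every substring of the squeezed phrase (O(n^2) lookups)."""
    text = "".join(ch for ch in phrase.lower() if ch.isalpha())
    found = []
    for start in range(len(text)):
        for end in range(start + 1, len(text) + 1):
            word = text[start:end]
            if word in dictionary:
                found.append((word, start))  # word and its position
    return found

def fill_slots(length, found, seed=None):
    """Step 4: shuffle candidates; keep each word whose slots are free."""
    rng = random.Random(seed)
    candidates = list(found)
    rng.shuffle(candidates)
    used = [False] * length
    kept = []
    for word, pos in candidates:
        span = range(pos, pos + len(word))
        if all(not used[i] for i in span):
            for i in span:
                used[i] = True
            kept.append((word, pos))
    return sorted(kept, key=lambda wp: wp[1])
```

For "cat call" with a toy dictionary {"cat", "call", "all"}, step 2 yields [("cat", 0), ("call", 3), ("all", 4)]; step 4 then keeps "cat" plus whichever of the overlapping "call"/"all" the shuffle reaches first.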

This is a random algorithm with what I believe to be all of the desired properties, but it's still rough in my mind.


 * Extra credit: determine the allowed overlaps for your system by character or syllable. This can make for an even larger gamut of accepted output phrases and a much higher level of difficulty.

answered Oct 07 '22 at 04:10 by MrGomez