
Choose or generate canonical variant from multiple sentences

I'm working with an API that maps my GTIN/EAN queries to product data.

Since the data returned originates from merchant product feeds, the following is almost universally the case:

  • Multiple results per GTIN
  • Products' titles are pretty much unstructured
  • Products' titles are "polluted" with
    • SEO-related stuff,
    • information about the quantity contained,
    • "buy two, get one free" offers,
    • etc.

I'm looking for a programmatic way to either

  • choose the "cleanest"/most canonical version available
  • or generate a new one that represents the "lowest common denominator".

Consider the following example results for a single EAN query:

  • Nivea Deo Roll-On Dry Impact for Men
  • NIVEA DEO Roll on Dry/blau
  • Nivea Deo Roll-On Dry Impact for Men, 50 ml, 3er Pack (3 x 50 ml)
  • Nivea Deo Roll on Dry/blau 50 ml
  • Nivea Deoroller 50ml dry for Men blau Mindestabnahme: 6 Stück (1 VE)
  • NIVEA Deoroller, Dry Impact for Men
  • NIVEA DEO Roll on Dry/blau_50 ml

My homebrew approach looks like this (a rough code sketch follows the list):

  • Basic cleanup:
    • Lowercase the titles,
    • strip excessive whitespace,
    • throw out apparent stopwords such as "buy" and "click"
  • Build an array of word => global occurrence
    • "Nivea" => 7
    • "Deo" => 5
    • "Deoroller" => 2
    • "VE" => 1
  • Calculate the "cumulative word value" for each of the titles
    • "Nivea Deo" => 12
    • "Nivea Deoroller VE" => 10
  • Divide the cumulative value by the length of the title (in words), resulting in a score
    • "Nivea Deo" => 6
    • "Nivea Deoroller VE" => 3.33

Obviously, my approach is pretty basic, error-prone and biased towards short sentences with frequently used words – yielding more or less satisfactory results.

  • Would you choose a different approach?
  • Is there some NLP magic way to take care of the problem that I don't know of?
asked Jun 01 '12 by vzwick


4 Answers

Since your existing metric seems to be biased towards shorter phrases, you should consider factoring bigrams into the mix. Instead of considering scores for just individual words, additionally consider the scores for consecutive pairs of words (e.g. 'nivea deo', 'deo roll-on', 'roll-on dry', etc.). When computing the score for each title, factor in the scores of every unigram and bigram you can generate out of it, but maybe give the bigrams more weight; this should encourage your algorithm to prefer longer phrases.
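
For illustration, a small PHP sketch of generating unigrams plus bigrams and weighting them (the bigram weight of 2.0 is an arbitrary placeholder, not a tuned value):

    <?php
    // Sketch: score a tokenized title using both unigrams and bigrams.
    // $counts maps each unigram/bigram to its occurrence count across all titles.

    function ngrams(array $words): array
    {
        $grams = $words; // unigrams
        for ($i = 0; $i < count($words) - 1; $i++) {
            $grams[] = $words[$i] . ' ' . $words[$i + 1]; // bigrams such as "nivea deo"
        }
        return $grams;
    }

    function weightedScore(array $words, array $counts, float $bigramWeight = 2.0): float
    {
        $score = 0.0;
        foreach (ngrams($words) as $gram) {
            $weight = strpos($gram, ' ') !== false ? $bigramWeight : 1.0;
            $score += $weight * ($counts[$gram] ?? 0);
        }
        return $score;
    }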

If you have a large existing corpus of names like these at your disposal, consider using something like TF-IDF. What you are doing right now can be likened to using just TF. Using your global corpus, you can compute the idf of each unigram and bigram, which is basically a measure of how unique or rare a word or phrase is across the entire corpus.
tf = the number of times you have seen an ngram within these results
idf = a global measure of how unique an ngram is across all results (or at least a very large number of them)
So when computing the score for a title, instead of simply adding up the tf of each ngram in it, you add up the tf*idf of each ngram. Rarer ngrams (which possibly do a better job of distinguishing this item from all other items) have a higher idf, so your algorithm should give them more weight. A lot of junk terms (like Mindestabnahme) would have a really high idf, but they would also have a really small tf, so they might not make a big difference. Alternatively, prune off tokens you see fewer than k times to get rid of noise.
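
A rough sketch of that tf*idf scoring in PHP; how the corpus statistics are obtained and passed in ($corpusDocFreq, $corpusSize) is an assumption for illustration:

    <?php
    // Sketch of tf*idf scoring. $resultNgramCounts holds the ngram counts within
    // the results for this one GTIN (tf); $corpusDocFreq maps each ngram to the
    // number of titles containing it in a global corpus of $corpusSize titles
    // (idf). Rare ngrams get a high idf and therefore more weight.

    function tfidfScore(array $titleNgrams, array $resultNgramCounts,
                        array $corpusDocFreq, int $corpusSize): float
    {
        $score = 0.0;
        foreach ($titleNgrams as $gram) {
            $tf  = $resultNgramCounts[$gram] ?? 0;
            $df  = $corpusDocFreq[$gram] ?? 1;      // avoid division by zero
            $idf = log($corpusSize / $df);
            $score += $tf * $idf;
        }
        return $score;
    }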

Another NLP trick to know about is Levenshtein distance, which is a way to quantify how similar two strings are. You can compute the Levenshtein distance between every pair of strings within your results and then prefer the result that has the lowest average distance to all the other strings. This might not work well by itself, but factoring this score in with your existing approach might help you navigate some tricky cases.
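
PHP ships a built-in levenshtein() function, so a sketch of the "lowest average distance" idea could look like this (the function name is mine):

    <?php
    // Sketch: prefer the title with the lowest average Levenshtein distance to
    // all other titles. Uses PHP's built-in levenshtein(); note that older PHP
    // versions limit it to strings of 255 characters, which is still plenty here.

    function mostCentralTitle(array $titles): string
    {
        $best = $titles[0];
        $bestAvg = INF;
        foreach ($titles as $candidate) {
            $total = 0;
            foreach ($titles as $other) {
                $total += levenshtein(mb_strtolower($candidate), mb_strtolower($other));
            }
            $avg = $total / max(1, count($titles) - 1);
            if ($avg < $bestAvg) {
                $bestAvg = $avg;
                $best = $candidate;
            }
        }
        return $best;
    }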

answered by Aditya Mukherji


I'd:

  1. Convert all strings to lower (or upper) case
  2. Do a multiple sequence alignment over all strings
  3. Convert back to original case
  4. Find the most frequent letter in every column
  5. Remove the gaps

For your example:

  1. Convert all strings to lower (or upper) case

    nivea deo roll-on dry impact for men
    nivea deo roll on dry/blau
    nivea deo roll-on dry impact for men, 50 ml, 3er pack (3 x 50 ml)
    nivea deo roll on dry/blau 50 ml
    nivea deoroller 50ml dry for men blau mindestabnahme: 6 stück (1 ve)
    nivea deoroller, dry impact for men
    nivea deo roll on dry/blau_50 ml
    
  2. Do a multiple sequence alignment over all strings

    nivea deo roll°°-on °°dry °°°°°°°°°°impact for men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    nivea deo roll°° on °°dry/blau°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    nivea deo roll°°-on °°dry °°°°°°°°°°impact for men, 50 ml, 3er pack (3 x 50 ml)°°°°°°°
    nivea deo roll°° on °°dry/blau °°°°°°°°°°°°°°°°°°°°°50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    nivea deo°roller 50ml dry °°°°°°°°°°°°°°°°°for men blau mindestabnahme: 6 stück (1 ve)
    nivea deo°roller, °°°°dry °°°°°°°°°°impact for men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    nivea deo roll°° on °°dry/blau_50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    

    (where ° denotes a gap character)

  3. Convert back to original case

    Nivea Deo Roll°°-On °°Dry °°°°°°°°°°Impact for Men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    NIVEA DEO Roll°° on °°Dry/blau°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    Nivea Deo Roll°°-On °°Dry °°°°°°°°°°Impact for Men, 50 ml, 3er Pack (3 x 50 ml)°°°°°°°
    Nivea Deo Roll°° on °°Dry/blau °°°°°°°°°°°°°°°°°°°°°50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    Nivea Deo°roller 50ml dry °°°°°°°°°°°°°°°°°for Men blau Mindestabnahme: 6 Stück (1 VE)
    NIVEA Deo°roller, °°°°Dry °°°°°°°°°°Impact for Men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    NIVEA DEO Roll°° on °°Dry/blau_50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    
  4. Find the most frequent letter in every column

    Nivea Deo Roll°° on °°Dry °°°°°°°°°°°°°°°°°for Men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
    
  5. Remove the gaps

    Nivea Deo Roll on Dry for Men
    

Everything except step 2 (multiple sequence alignment) is straightforward. Multiple sequence alignment is commonly used in bioinformatics, where plenty of implementations exist; you can certainly find C or Java code, but I'm not sure about PHP.
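
Assuming the alignment from step 2 has already been computed, steps 4 and 5 are easy; here is a small PHP sketch (the '°' gap marker and the function name follow the notation above):

    <?php
    // Sketch of steps 4 and 5: given already-aligned strings of equal length
    // (gaps marked with '°'), keep the most frequent character of every column,
    // then strip the gap characters. mb_* functions are used so that the
    // multi-byte '°' marker and umlauts are handled correctly.

    function consensus(array $aligned, string $gap = '°'): string
    {
        $length = mb_strlen($aligned[0]);
        $result = '';
        for ($col = 0; $col < $length; $col++) {
            $counts = [];
            foreach ($aligned as $row) {
                $ch = mb_substr($row, $col, 1);
                $counts[$ch] = ($counts[$ch] ?? 0) + 1;
            }
            arsort($counts);
            $result .= array_key_first($counts); // most frequent character in this column
        }
        return trim(str_replace($gap, '', $result)); // step 5: remove the gaps
    }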

Update:

To get you started with multiple alignment: so-called "star alignment" is basically a combination of pairwise alignments of one sequence (the "star center") with every other one. A pairwise alignment is typically computed using dynamic programming. To produce a multiple alignment, you choose one of your strings to be the star center, find the pairwise alignments between it and every other string, and then align the pairwise alignments with each other by introducing gaps into them so that the star centers in all alignments are perfectly aligned. You can repeat the procedure, using the result of the previous step as the star center for the next one, until it converges, i.e. the result no longer changes.
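
For illustration, a minimal PHP sketch of such a pairwise alignment with unit costs (match = 0, mismatch = 1, gap = 1); it is character-level, unoptimized, and uses '°' as the gap symbol as above:

    <?php
    // Minimal sketch of pairwise global alignment via dynamic programming.
    // Returns the two aligned strings with '°' inserted as the gap character.

    function pairwiseAlign(string $a, string $b, string $gap = '°'): array
    {
        $aChars = preg_split('//u', $a, -1, PREG_SPLIT_NO_EMPTY);
        $bChars = preg_split('//u', $b, -1, PREG_SPLIT_NO_EMPTY);
        $n = count($aChars);
        $m = count($bChars);

        // dp[i][j] = minimal cost of aligning the first i chars of $a with the first j of $b
        $dp = [];
        for ($i = 0; $i <= $n; $i++) {
            for ($j = 0; $j <= $m; $j++) {
                if ($i === 0) {
                    $dp[$i][$j] = $j;
                } elseif ($j === 0) {
                    $dp[$i][$j] = $i;
                } else {
                    $sub = $dp[$i - 1][$j - 1] + ($aChars[$i - 1] === $bChars[$j - 1] ? 0 : 1);
                    $dp[$i][$j] = min($sub, $dp[$i - 1][$j] + 1, $dp[$i][$j - 1] + 1);
                }
            }
        }

        // Trace back from the bottom-right corner to recover the alignment
        $alignedA = $alignedB = '';
        $i = $n;
        $j = $m;
        while ($i > 0 || $j > 0) {
            if ($i > 0 && $j > 0 &&
                $dp[$i][$j] === $dp[$i - 1][$j - 1] + ($aChars[$i - 1] === $bChars[$j - 1] ? 0 : 1)) {
                $alignedA = $aChars[--$i] . $alignedA;
                $alignedB = $bChars[--$j] . $alignedB;
            } elseif ($i > 0 && $dp[$i][$j] === $dp[$i - 1][$j] + 1) {
                $alignedA = $aChars[--$i] . $alignedA;
                $alignedB = $gap . $alignedB;
            } else {
                $alignedA = $gap . $alignedA;
                $alignedB = $bChars[--$j] . $alignedB;
            }
        }
        return [$alignedA, $alignedB];
    }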

Update 2: You can also use whole words as symbols (quanta, atoms) to align. Say, A = nivea, B = deo, etc. This has the following advantages:

  1. Words cannot be altered by the alignment and are treated equally, irrespective of their lengths
  2. When doing pairwise alignment, you can assign costs for substituting individual words ("symbols") according to TF-IDF and their synonyms. Pairwise alignment tries to minimize (weighted) Levenshtein distance between the sequences.

In your example (try it yourself) this would lead to

Nivea Deo Roll on Dry for Men 50 ml

i.e. here we also get the "50 ml". This is because skipping two-letter words is not cheaper than skipping 20-letter words.
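
As a rough sketch of this word-as-symbol idea: tokenize each title into word symbols and define a substitution cost for the pairwise alignment; the idf-based weighting below is only an illustration, not a tuned metric:

    <?php
    // Sketch: align whole words instead of characters. Each title becomes an
    // array of word symbols; substituting one word for another during pairwise
    // alignment costs more if the words involved are rare (high idf).

    function wordSymbols(string $title): array
    {
        return preg_split('/\s+/u', mb_strtolower(trim($title)), -1, PREG_SPLIT_NO_EMPTY);
    }

    function substitutionCost(string $a, string $b, array $idf): float
    {
        if ($a === $b) {
            return 0.0; // identical words cost nothing
        }
        // Swapping rare (high-idf) words is more expensive than swapping filler words.
        return ($idf[$a] ?? 1.0) + ($idf[$b] ?? 1.0);
    }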

answered by Igor F.


If I understood right, you are not matching those names against an existing database but trying to get as close as possible to the actual name of the product, so here's my idea:

  1. Don't do the usual cleanup - just remove extra whitespace - leave stop words, character case, separators (such as hyphen) intact.
  2. Split strings into terms on whitespace.
  3. Split terms further on separators, digit/letter boundaries and character case, but don't remove the originals; just add the additional terms right after the term they were split from. Lowercase all terms and remove duplicate and empty terms.
  4. Remove terms that are not common to a fixed percentage of the strings (whether that should be 40% or 95% you'll know after testing). This will remove most, if not all, of the stuff added by merchants.
  5. Find the most common position for each term.
  6. If you get more than one term with the same most common position, check which one more commonly comes before the other. Increase the "loser's" position by 1 and repeat until there is no conflict.
  7. For each term that is left, pick the most common capitalization.
  8. Merge remaining terms

Using your example it would work like this:

Step 1:

    Nivea Deo Roll-On Dry Impact for Men
    NIVEA DEO Roll on Dry/blau
    Nivea Deo Roll-On Dry Impact for Men, 50 ml, 3er Pack (3 x 50 ml)
    Nivea Deo Roll on Dry/blau 50 ml
    Nivea Deoroller 50ml dry for Men blau Mindestabnahme: 6 Stück (1 VE)
    NIVEA Deoroller, Dry Impact for Men
    NIVEA DEO Roll on Dry/blau_50 ml

Step 2:

    Nivea, Deo, Roll-On, Dry, Impact, for, Men
    NIVEA, DEO, Roll, on, Dry/blau
    Nivea, Deo, Roll-On, Dry, Impact, for, Men, 50, ml, 3er, Pack, (3, x, 50, ml)
    Nivea, Deo, Roll, on, Dry/blau, 50, ml
    Nivea, Deoroller, 50ml, dry, for, Men, blau, Mindestabnahme:, 6, Stück, (1, VE)
    NIVEA, Deoroller, Dry, Impact, for, Men
    NIVEA, DEO, Roll, on, Dry/blau_50, ml

Step 3:

    nivea, deo, roll-on, roll, on, dry, impact, for, men
    nivea, deo, roll, on, dry/blau, dry, blau
    nivea, deo, roll-on, roll, on, dry, impact, for, men, 50, ml, 3er, pack, 3, x, 50, ml
    nivea, deo, roll, on, dry, blau, 50, ml
    nivea, deoroller, 50ml, 50, ml, dry, for, men, blau, mindestabnahme, 6, stück, 1, ve
    nivea, deoroller, dry, impact, for, men
    nivea, deo, roll, on, dry/blau, dry, blau, 50, ml

Step 4 (assuming a threshold of 60%):

    nivea, deo, roll, on, dry, for, men
    nivea, deo, roll, on, dry, blau
    nivea, deo, roll, on, dry, for, men, 50, ml
    nivea, deo, roll, on, dry, 50, ml
    nivea, 50, ml, dry, for, men, blau
    nivea, dry, for, men
    nivea, deo, roll, on, dry, blau, 50, ml

Step 5:

    nivea => 1
    deo => 2
    roll => 3
    on => 4
    dry => 5
    for => 6
    men => 7
    blau => 6
    50 => 8, 6, 2, 7
    ml => 8, 7, 3, 8

Step 6:

    nivea => 1
    deo => 2
    roll => 3
    on => 4
    dry => 5
    blau => 6
    for => 7
    men => 8
    50 => 9
    ml => 10

Step 7:

    Nivea, Deo, Roll, on, Dry, blau, for, Men, 50, ml

Step 8:

    End result: Nivea Deo Roll on Dry blau for Men 50 ml
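
A rough PHP sketch of steps 2-4 (splitting into terms, adding the split-off sub-terms, and filtering out uncommon terms); the 0.6 threshold mirrors the 60% assumed above, and all names are illustrative:

    <?php
    // Sketch of steps 2-4: split each string into terms, add lowercased
    // sub-terms (split on separators and digit/letter boundaries) while keeping
    // the originals, then drop terms that do not appear in enough of the strings.

    function termsOf(string $title): array
    {
        $terms = [];
        foreach (preg_split('/\s+/u', trim($title)) as $term) {               // step 2
            $terms[] = mb_strtolower($term);
            // step 3: split on separators and digit/letter boundaries, keep the original
            $parts = preg_split('/[\/_,:()-]+|(?<=\d)(?=\pL)|(?<=\pL)(?=\d)/u', $term);
            foreach ($parts as $part) {
                $terms[] = mb_strtolower($part);
            }
        }
        return array_values(array_unique(array_filter($terms, 'strlen')));    // dedupe, drop empties
    }

    function commonTerms(array $titles, float $threshold = 0.6): array
    {
        $termSets = array_map('termsOf', $titles);

        // In how many of the strings does each term occur?
        $counts = [];
        foreach ($termSets as $set) {
            foreach ($set as $term) {
                $counts[$term] = ($counts[$term] ?? 0) + 1;
            }
        }

        // step 4: keep only terms common to roughly $threshold of the strings
        $minStrings = (int) floor($threshold * count($titles));
        return array_map(function (array $set) use ($counts, $minStrings) {
            return array_values(array_filter($set, function ($term) use ($counts, $minStrings) {
                return $counts[$term] >= $minStrings;
            }));
        }, $termSets);
    }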

This method has the following problems:

  1. It doesn't handle names with duplicate terms well (or at all - all term occurrences after the first one are lost)
  2. It's biased towards splitting hyphen-separated or case-separated (ProductName) name parts into space-separated ones - this can be fixed by checking for the most common version at step 8.
answered by c2h5oh


You have a very good NLP problem. I worked on a similar one about a year ago. I would also recommend the approach by adi92, but in case you need to use NLP software, I suggest Stanford NLP. The software and the respective publications can be found at http://nlp.stanford.edu/. Hope this helps.

answered by Krishna