I'm working with an API that maps my GTIN/EAN queries to product data.
Since the data returned originates from merchant product feeds, the titles returned for one and the same product almost always differ from merchant to merchant.
I'm looking for a programmatic way to either pick the best of these titles or derive a canonical title from them.
Consider the following example results for a single EAN query:

Nivea Deo Roll-On Dry Impact for Men
NIVEA DEO Roll on Dry/blau
Nivea Deo Roll-On Dry Impact for Men, 50 ml, 3er Pack (3 x 50 ml)
Nivea Deo Roll on Dry/blau 50 ml
Nivea Deoroller 50ml dry for Men blau Mindestabnahme: 6 Stück (1 VE)
NIVEA Deoroller, Dry Impact for Men
NIVEA DEO Roll on Dry/blau_50 ml
My homebrew approach looks like this:

1. Count how often each word occurs across all returned titles:

word => global occurrence

"Nivea" => 7
"Deo" => 5
"Deoroller" => 2
…
"VE" => 1

2. Score each title by summing the global occurrence counts of its words:

"Nivea Deo" => 12
"Nivea Deoroller VE" => 10

3. Normalize that score by the number of words in the title:

"Nivea Deo" => 6
"Nivea Deoroller VE" => 3.34
Obviously, my approach is pretty basic and error-prone, and it is biased towards short titles built from frequently used words, so the results are only more or less satisfactory.
Since your existing metric is biased towards shorter phrases, you should consider factoring bigrams into the mix. Instead of scoring just individual words, also score consecutive pairs of words (e.g. 'nivea deo', 'deo roll-on', 'roll-on dry', etc.). When computing the score for each title, add up the scores of every unigram and bigram you can generate from it, but give the bigrams more weight; this encourages your algorithm to prefer longer phrases.
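Here is a minimal Python sketch of that combined unigram/bigram scoring (the bigram weight of 2.0 is an arbitrary starting point to tune; the per-word normalization from your approach is kept):

```python
from collections import Counter

def ngram_scores(titles, bigram_weight=2.0):
    # Collect unigram and bigram counts across all titles.
    tokenized = [t.lower().split() for t in titles]
    unigrams, bigrams = Counter(), Counter()
    for words in tokenized:
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    scores = {}
    for title, words in zip(titles, tokenized):
        score = sum(unigrams[w] for w in words)
        # Weight bigrams higher to reward longer shared phrases.
        score += bigram_weight * sum(bigrams[b] for b in zip(words, words[1:]))
        scores[title] = score / max(len(words), 1)
    return scores
```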
If you have a large existing corpus of names like these at your disposal, consider using something like TF-IDF.

What you are doing right now can be likened to using just TF. Using your global corpus, you can compute the IDF of each unigram and bigram, which is basically a measure of how unique or rare a word or phrase is across the entire corpus:

tf = the number of times you have seen an ngram within these results
idf = a global measure of how unique an ngram is across all results (or at least a very large number of them)

So when computing the score for a title, instead of simply adding up the tf of each ngram in it, you add up tf*idf for each ngram instead. Rarer ngrams (which possibly do a better job of distinguishing this item from all the others) have a higher idf, so your algorithm should give higher weight to them. A lot of junk terms (like Mindestabnahme) would have a really high idf, but they would also have a really small tf, so they might not make much difference. Alternatively, prune off tokens you see fewer than k times to get rid of noise.
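A sketch of the tf*idf variant (Python; unigrams only for brevity, bigrams extend it the same way, and k implements the pruning just mentioned):

```python
import math
from collections import Counter

def build_idf(corpus):
    # corpus: a large collection of titles across many different products.
    df = Counter()
    for title in corpus:
        df.update(set(title.lower().split()))
    n = len(corpus)
    return {word: math.log(n / df[word]) for word in df}

def tfidf_scores(results, idf, k=2):
    # tf is counted within this result set; tokens seen fewer
    # than k times are pruned as noise.
    tf = Counter(w for t in results for w in t.lower().split())
    return {
        title: sum(tf[w] * idf.get(w, 0.0)
                   for w in title.lower().split() if tf[w] >= k)
        for title in results
    }
```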
Another NLP trick to know about is Levenshtein distance, a way to quantify how similar two strings are. You can compute the Levenshtein distance between every pair of strings within your results and then prefer the result with the lowest average distance to all the other strings. This might not work well by itself, but factoring this score in with your existing approach might help you navigate some tricky cases.
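A self-contained sketch (Python), using the classic dynamic-programming formulation of Levenshtein distance:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, two rows at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def most_central(titles):
    # The title with the lowest average distance to all the others.
    return min(titles, key=lambda t: sum(levenshtein(t, u) for u in titles))
```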
I'd proceed as follows. For your example:

1. Convert all strings to lower (or upper) case:
nivea deo roll-on dry impact for men
nivea deo roll on dry/blau
nivea deo roll-on dry impact for men, 50 ml, 3er pack (3 x 50 ml)
nivea deo roll on dry/blau 50 ml
nivea deoroller 50ml dry for men blau mindestabnahme: 6 stück (1 ve)
nivea deoroller, dry impact for men
nivea deo roll on dry/blau_50 ml
2. Do a multiple sequence alignment over all strings:
nivea deo roll°°-on °°dry °°°°°°°°°°impact for men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
nivea deo roll°° on °°dry/blau°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
nivea deo roll°°-on °°dry °°°°°°°°°°impact for men, 50 ml, 3er pack (3 x 50 ml)°°°°°°°
nivea deo roll°° on °°dry/blau °°°°°°°°°°°°°°°°°°°°°50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
nivea deo°roller 50ml dry °°°°°°°°°°°°°°°°°for men blau mindestabnahme: 6 stück (1 ve)
nivea deo°roller, °°°°dry °°°°°°°°°°impact for men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
nivea deo roll°° on °°dry/blau_50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

(where ° denotes a gap character)
3. Convert back to the original case:
Nivea Deo Roll°°-On °°Dry °°°°°°°°°°Impact for Men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
NIVEA DEO Roll°° on °°Dry/blau°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
Nivea Deo Roll°°-On °°Dry °°°°°°°°°°Impact for Men, 50 ml, 3er Pack (3 x 50 ml)°°°°°°°
Nivea Deo Roll°° on °°Dry/blau °°°°°°°°°°°°°°°°°°°°°50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
Nivea Deo°roller 50ml dry °°°°°°°°°°°°°°°°°for Men blau Mindestabnahme: 6 Stück (1 VE)
NIVEA Deo°roller, °°°°Dry °°°°°°°°°°Impact for Men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
NIVEA DEO Roll°° on °°Dry/blau_50 ml°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
4. Find the most frequent letter in every column:
Nivea Deo Roll°° on °°Dry °°°°°°°°°°°°°°°°°for Men°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
5. Remove the gaps:
Nivea Deo Roll on Dry for Men
Everything except step 2 (multiple sequence alignment) is straightforward. Multiple sequence alignment is commonly used in bioinformatics, and there is plenty of literature and reference code for it. You can certainly find C or Java implementations, but I'm not sure about PHP.
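For instance, steps 4 and 5 together are just a majority vote over the alignment columns. A minimal Python sketch, assuming the aligned rows arrive as equal-length strings (ties within a column are broken arbitrarily here):

```python
from collections import Counter

def consensus(rows, gap="°"):
    # rows: the aligned, case-restored strings, all of equal length.
    result = []
    for column in zip(*rows):
        letter, _ = Counter(column).most_common(1)[0]  # majority vote per column
        if letter != gap:
            result.append(letter)  # drop the gap characters
    return "".join(result)
```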
Update:
To get you started with multiple alignment: the so-called "star alignment" is basically a combination of pairwise alignments of one sequence (the "star center") with every other sequence. A pairwise alignment is typically computed using dynamic programming (e.g. the Needleman-Wunsch algorithm). To produce the multiple alignment, you choose one of your strings to be the star center, find the pairwise alignments between it and every other string, and then align those pairwise alignments with each other by introducing gaps into them so that the star center is perfectly aligned in all of them. You can repeat the procedure, using the result of the previous step as the star center for the next one, until it converges, i.e. the result no longer changes.
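A minimal Python sketch of the pairwise building block, a plain Needleman-Wunsch alignment with unit match/mismatch/gap scores (combining the pairwise results into the star alignment proceeds as described above):

```python
def align(a, b, gap="°"):
    # Needleman-Wunsch global alignment: +1 for a match, -1 for a mismatch
    # or a gap. Works on any sequences: two strings (character-level)
    # or two lists of words (word-level).
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        score[i][0] = -i
    for j in range(m + 1):
        score[0][j] = -j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 1 if a[i - 1] == b[j - 1] else -1
            score[i][j] = max(score[i - 1][j - 1] + match,  # (mis)match
                              score[i - 1][j] - 1,          # gap in b
                              score[i][j - 1] - 1)          # gap in a
    # Trace back from the bottom-right corner to recover the alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        match = 1 if i > 0 and j > 0 and a[i - 1] == b[j - 1] else -1
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + match:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i - 1][j] - 1:
            out_a.append(a[i - 1]); out_b.append(gap); i -= 1
        else:
            out_a.append(gap); out_b.append(b[j - 1]); j -= 1
    return out_a[::-1], out_b[::-1]  # lists of symbols; "".join() for strings
```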
Update 2:
You can also use whole words as the symbols (quanta, atoms) to align: say, A = nivea, B = deo, etc. This has several advantages.
In your example (try it yourself) this would lead to

Nivea Deo Roll on Dry for Men 50 ml

i.e. here we also get the "50 ml". This is because, at the word level, skipping a two-letter word is no cheaper than skipping a 20-letter word.
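Reusing the align() sketch from above, word-level alignment is just a matter of tokenizing before aligning:

```python
# align() from the earlier sketch works on any sequence type, so switching
# to word-level alignment only requires splitting the titles into tokens.
a = "nivea deo roll-on dry impact for men".split()
b = "nivea deo roll on dry/blau 50 ml".split()
row_a, row_b = align(a, b)
print(" ".join(row_a))
print(" ".join(row_b))
```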
If I understood right, you are not matching those names against an existing database but trying to get as close as possible to the actual name of the product, so here's my idea. Using your example, it would work like this:

Step 1: start from the raw titles returned for the EAN:
Nivea Deo Roll-On Dry Impact for Men
NIVEA DEO Roll on Dry/blau
Nivea Deo Roll-On Dry Impact for Men, 50 ml, 3er Pack (3 x 50 ml)
Nivea Deo Roll on Dry/blau 50 ml
Nivea Deoroller 50ml dry for Men blau Mindestabnahme: 6 Stück (1 VE)
NIVEA Deoroller, Dry Impact for Men
NIVEA DEO Roll on Dry/blau_50 ml
Step 2: split each title into words:
Nivea, Deo, Roll-On, Dry, Impact, for, Men
NIVEA, DEO, Roll, on, Dry/blau
Nivea, Deo, Roll-On, Dry, Impact, for, Men, 50, ml, 3er, Pack, (3, x, 50, ml)
Nivea, Deo, Roll, on, Dry/blau, 50, ml
Nivea, Deoroller, 50ml, dry, for, Men, blau, Mindestabnahme:, 6, Stück, (1, VE)
NIVEA, Deoroller, Dry, Impact, for, Men
NIVEA, DEO, Roll, on, Dry/blau_50, ml
Step 3: lowercase everything, strip punctuation, and additionally split compound tokens on -, /, _ and digit/letter boundaries, keeping the compound alongside its parts:
nivea, deo, roll-on, roll, on, dry, impact, for, men
nivea, deo, roll, on, dry/blau, dry, blau
nivea, deo, roll-on, roll, on, dry, impact, for, men, 50, ml, 3er, pack, 3, x, 50, ml
nivea, deo, roll, on, dry, blau, 50, ml
nivea, deoroller, 50ml, 50, ml, dry, for, men, blau, mindestabnahme, 6, stück, 1, ve
nivea, deoroller, dry, impact, for, men
nivea, deo, roll, on, dry/blau, dry, blau, 50, ml
Step 4: discard tokens that occur in fewer than a threshold fraction of the lists (assuming a threshold of 60%):
nivea, deo, roll, on, dry, for, men
nivea, deo, roll, on, dry, blau
nivea, deo, roll, on, dry, for, men, 50, ml
nivea, deo, roll, on, dry, 50, ml
nivea, 50, ml, dry, for, men, blau
nivea, dry, for, men
nivea, deo, roll, on, dry, blau, 50, ml
Step 5: note the position at which each surviving token occurs in the lists (a single number where the position is consistent, several where it varies):
nivea => 1
deo => 2
roll => 3
on => 4
dry => 5
for => 6
men => 7
blau => 6
50 => 8, 6, 2, 7
ml => 8, 7, 3, 8
Step 6: resolve each token to a single consensus position, yielding a global ordering:
nivea => 1
deo => 2
roll => 3
on => 4
dry => 5
blau => 6
for => 7
men => 8
50 => 9
ml => 10
Step 7: list the tokens in that order, restoring each token's typical original casing:

Nivea, Deo, Roll, on, Dry, blau, for, Men, 50, ml
Step 8: join the tokens with spaces.

End result: Nivea Deo Roll on Dry blau for Men 50 ml
This method has some problems, though.
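For what it's worth, a condensed Python sketch of steps 2 through 8 (simplified: compound tokens are replaced by their parts rather than kept alongside them, and restoring the original casing in step 7 is omitted):

```python
import re
from collections import defaultdict

def consensus_title(titles, threshold=0.6):
    # Steps 2+3: lowercase, split on whitespace, then break compound
    # tokens on -, /, _, punctuation and digit/letter boundaries.
    split_re = re.compile(r"[-/_,:()]+|(?<=\d)(?=[a-z])")
    token_lists = [
        [t for word in title.lower().split() for t in split_re.split(word) if t]
        for title in titles
    ]
    # Step 4: keep only tokens present in at least `threshold` of the lists.
    doc_freq = defaultdict(int)
    for tokens in token_lists:
        for t in set(tokens):
            doc_freq[t] += 1
    keep = {t for t, n in doc_freq.items() if n / len(titles) >= threshold}
    filtered = [[t for t in tokens if t in keep] for tokens in token_lists]
    # Steps 5+6: order the surviving tokens by their average position.
    positions = defaultdict(list)
    for tokens in filtered:
        for pos, t in enumerate(tokens):
            positions[t].append(pos)
    ordered = sorted(positions, key=lambda t: sum(positions[t]) / len(positions[t]))
    # Steps 7+8: join the tokens in consensus order.
    return " ".join(ordered)
```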
You have a very good NLP problem; I worked on a similar one about a year ago. I would also recommend the approach by adi92, but in case you want to use existing NLP software, I suggest the Stanford NLP tools. The software and the respective publications are available at http://nlp.stanford.edu/. Hope this helps.