 

Find subject in incomplete sentence with NLTK

Tags: python, nlp, nltk

I have a list of products that I am trying to classify into categories. They will be described with incomplete sentences like:

"Solid State Drive Housing"

"Hard Drive Cable"

"1TB Hard Drive"

"500GB Hard Drive, Refurbished from Manufacturer"

How can I use Python and NLP to get an output like "Housing, Cable, Drive, Drive", or a tree that describes which word is modifying which? Thank you in advance.

asked Jan 12 '12 by Jmjmh


3 Answers

NLP techniques are relatively ill-equipped to deal with this kind of text.

Phrased differently: it is quite possible to build a solution that includes NLP processing to implement the desired classifier, but the added complexity doesn't necessarily pay off in terms of development speed or classifier precision.
If one really insists on using NLP techniques, POS-tagging and its ability to identify nouns is the most obvious idea; chunking and access to WordNet or other lexical sources are other plausible uses of NLTK.
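
For what it's worth, a minimal sketch of that POS-tagging idea with NLTK might look something like the following (it assumes the punkt and averaged_perceptron_tagger data packages are downloaded); note that the tagger can easily mis-tag title-cased modifiers like "Solid" or "Hard", which is part of why precision tends to disappoint on fragments like these:

import nltk

# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
def head_noun(phrase):
    first_part = phrase.split(",")[0]          # ignore ", Refurbished from Manufacturer" etc.
    tagged = nltk.pos_tag(nltk.word_tokenize(first_part))
    nouns = [word for word, tag in tagged if tag.startswith("NN")]
    return nouns[-1] if nouns else None        # the last noun is usually the head of the phrase

for phrase in ["Solid State Drive Housing", "Hard Drive Cable",
               "1TB Hard Drive", "500GB Hard Drive, Refurbished from Manufacturer"]:
    print(phrase, "->", head_noun(phrase))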

Instead, an ad-hoc solution based on simple regular expressions and a few heuristics such as those suggested by NoBugs is probably a more appropriate approach to the problem. Certainly, such solutions bear two main risks:

  • over-fitting to the portion of the text reviewed/considered in building the rules
  • possible messiness/complexity of the solution if too many rules and sub-rules are introduced.

Running some plain statistical analysis on the complete set (or a very big sample) of the texts to be considered should help guide the selection of a few heuristics and also avoid the over-fitting concern. I'm quite sure that a relatively small number of rules, combined with a custom dictionary, should be sufficient to produce a classifier with appropriate precision as well as speed/resource performance.

A few ideas:

  • count all the words (and possibly all the bi-grams and tri-grams) in a sizable portion of the corpus at hand. This info can drive the design of the classifier by letting you allocate the most effort and the most rigid rules to the most common patterns.
  • manually introduce a short dictionary which associates the most popular words with:
    • their POS function (mostly a binary matter here, i.e. nouns vs. modifiers and other non-nouns)
    • their synonym root [if applicable]
    • their class [if applicable]
  • If the pattern holds for most of the input text, consider using the last word before the end of text or before the first comma as the main key to class selection (a toy sketch of this rule follows the list). If the pattern doesn't hold, just give more weight to the first and to the last word.
  • consider a first pass where the text is re-written with the most common bi-grams replaced by a single word (even an artificial code word) which would be in the dictionary
  • consider also replacing the most common typos or synonyms with their corresponding synonym root. Adding regularity to the input improves precision and also lets a few rules / a few dictionary entries deliver a big return on precision.
  • for words not found in the dictionary, assume that words which are mixed with numbers and/or preceded by numbers are modifiers, not nouns.
  • consider a two-tier classification whereby inputs which cannot be plausibly assigned a class are put in a "manual pile" to prompt additional review, which results in additional rules and/or dictionary entries. After a few iterations the classifier should require less and less improvement and tweaking.
  • look for non-obvious features. For example some corpora are made from a mix of sources, and some of the sources may include particular regularities which help identify the source and/or can be used as classification hints. For example some sources may only contain, say, uppercase text (or text typically longer than 50 characters, or truncated words at the end, etc.).
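
By way of illustration only, here is a toy sketch of the last-word and dictionary rules above; the word lists and class names are invented for the example, not taken from any real corpus:

import re

# Tiny hand-built dictionary: known head nouns and an invented class for each.
NOUN_CLASSES = {"drive": "storage", "cable": "accessory", "housing": "enclosure"}

# Words containing digits ("1TB", "500GB") are treated as modifiers, not nouns.
HAS_DIGIT = re.compile(r"\d")

def classify(text):
    first_part = text.split(",")[0]              # ignore everything after the first comma
    words = [w.lower() for w in first_part.split()]
    for word in reversed(words):                 # the last word is the main key to class selection
        if HAS_DIGIT.search(word):
            continue
        if word in NOUN_CLASSES:
            return word, NOUN_CLASSES[word]
    return None, "manual pile"                   # two-tier fallback: route to manual review

for item in ["Solid State Drive Housing", "Hard Drive Cable",
             "1TB Hard Drive", "500GB Hard Drive, Refurbished from Manufacturer"]:
    print(item, "->", classify(item))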

Beyond the toy sketches above, I'm afraid this answer falls short of providing a full Python/NLTK solution, but frankly such simple NLTK-based approaches are likely to be disappointing at best. Also, we would need a much bigger sample of the input text to guide the selection of plausible approaches, including ones based on NLTK or NLP techniques at large.

answered Nov 18 '22 by mjv


pip install spacy
python -m spacy download en

import spacy

nlp = spacy.load('en')
sent = "INCOMPLETE SENTENCE HERE"
doc = nlp(sent)
sub_toks = [tok for tok in doc if (tok.dep_ == "ROOT")]

Examples:

sent = "Solid State Drive Housing"
doc=nlp(sent)
sub_toks = [tok for tok in doc if (tok.dep_ == "ROOT") ]

output: [Housing]

sent = "Hard Drive Cable"
doc=nlp(sent)
sub_toks = [tok for tok in doc if (tok.dep_ == "ROOT") ]

output: [Cable]

sent = "1TB Hard Drive"
doc=nlp(sent)
sub_toks = [tok for tok in doc if (tok.dep_ == "ROOT") ]

output: [Drive]

sent = "500GB Hard Drive, Refurbished from Manufacturer"
doc=nlp(sent)
sub_toks = [tok for tok in doc if (tok.dep_ == "ROOT") ]

output: [Drive]
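
To get the combined output "Housing, Cable, Drive, Drive" in one pass over the whole list, a short loop will do; note that newer spaCy releases ship the small English model as en_core_web_sm (installed with python -m spacy download en_core_web_sm) rather than plain en:

import spacy

nlp = spacy.load("en_core_web_sm")   # use spacy.load('en') on older spaCy versions
products = ["Solid State Drive Housing",
            "Hard Drive Cable",
            "1TB Hard Drive",
            "500GB Hard Drive, Refurbished from Manufacturer"]

# The syntactic ROOT of each fragment is the head word, as in the examples above.
heads = [next(tok.text for tok in nlp(p) if tok.dep_ == "ROOT") for p in products]
print(", ".join(heads))              # Housing, Cable, Drive, Drive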

answered Nov 18 '22 by REEP


I would create a list of nouns, either manually, listing all the nouns you're looking for, or by parsing a dictionary such as this one. Filtering out all but the nouns would at least get you to "State Drive", "Drive Cable", or "Drive", ignoring everything after the first punctuation mark.
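
A rough sketch of that filter might look like the following; the noun set here is a small made-up stand-in for a real dictionary:

# Keep only words found in a hand-built noun list, ignoring everything after
# the first punctuation mark; the noun set below is illustrative only.
NOUNS = {"state", "drive", "cable"}

def noun_filter(text):
    first_part = text.split(",")[0]
    return [w for w in first_part.split() if w.lower() in NOUNS]

print(noun_filter("Solid State Drive Housing"))                         # ['State', 'Drive']
print(noun_filter("500GB Hard Drive, Refurbished from Manufacturer"))   # ['Drive']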

answered Nov 18 '22 by NoBugs