
Identifying important words and phrases in text

Tags:

python

nlp

nltk

I have text stored in a Python string.

What I Want

  1. To identify key words in that text.
  2. To identify N-grams in that text (ideally more than just bi- and tri-grams).

Keep in mind...

  • The text might be small (e.g. tweet sized)
  • The text might be medium-sized (e.g. news article sized)
  • The text might be large (e.g. book or chapter sized)

What I Have

I'm already using nltk to break the corpus into tokens and remove stopwords:

    # split on runs of non-word characters (apostrophes are kept inside words)
    tokenizer = nltk.tokenize.RegexpTokenizer(r"[^\w']+", gaps=True)

    # tokenize
    tokens = tokenizer.tokenize(text)

    # remove stopwords (a set makes the membership test fast)
    stopwords = set(nltk.corpus.stopwords.words('english'))
    tokens = [w for w in tokens if w not in stopwords]

I'm aware of the BigramCollocationFinder and TrigramCollocationFinder, which do exactly what I'm looking for in those two cases.
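
For reference, here is roughly how I'm using the bigram finder (just a minimal sketch; PMI is only one of the association measures NLTK offers, and the frequency filter threshold is arbitrary):

    from nltk.collocations import BigramCollocationFinder, BigramAssocMeasures

    bigram_measures = BigramAssocMeasures()
    finder = BigramCollocationFinder.from_words(tokens)
    finder.apply_freq_filter(2)                       # ignore bigrams seen only once
    top_bigrams = finder.nbest(bigram_measures.pmi, 10)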

The Question

I'm looking for advice on handling n-grams of higher order, on improving the kinds of results that come from BCF and TCF, and on the best way to identify the most distinctive individual key words.

Many thanks!

asked Oct 06 '11 by slifty

1 Answer

As for the best way to identify the most distinctive individual key words, tf-idf is the standard measure. You would need to integrate a search engine of some kind (or build a simple custom inverted index that is dynamic and holds term frequencies and document frequencies) in order to calculate tf-idf efficiently and on the fly.
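
To make that concrete, here is a minimal sketch of such a custom index (the documents list and the tfidf helper are hypothetical names; a real search engine or an existing library would do this more efficiently):

    import math
    from collections import Counter

    documents = [tokens]                      # one token list per document

    # document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in documents:
        df.update(set(doc))

    doc = documents[0]
    tf = Counter(doc)
    n_docs = float(len(documents))

    def tfidf(term):
        # smoothed idf so a single-document corpus still ranks sensibly
        idf = math.log((1.0 + n_docs) / (1.0 + df[term])) + 1.0
        return (tf[term] / float(len(doc))) * idf

    # rank the distinct terms of one document by tf-idf
    scores = dict((t, tfidf(t)) for t in tf)
    keywords = sorted(scores, key=scores.get, reverse=True)[:10]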

As for your N-grams, you could build a simple custom parser using a "window" approach (where the window has length N) and identify, say, the most frequent ones: just keep every N-gram as a key in a dictionary, with the value being either its frequency or a score based on the tf-idf of its individual terms.
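
For example (a rough sketch; nltk.util.ngrams just slides the window for you, and both the choice of N and the use of raw frequency as the score are arbitrary):

    from collections import Counter
    from nltk.util import ngrams

    N = 4
    counts = Counter(ngrams(tokens, N))       # each N-gram tuple -> its frequency
    most_common = counts.most_common(20)

    # or score each N-gram by the summed tf-idf of its member terms
    # scored = dict((g, sum(tfidf(t) for t in g)) for g in counts)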

answered Oct 25 '22 by hymloth