I have text stored in a Python string.
What I Want
To identify the key words in that text, and the n-grams that occur in it (ideally beyond just bigrams and trigrams).
Keep in mind...
What I Have
I'm already using nltk to break the corpus into tokens and remove stopwords:
import nltk

# split across any non-word character (apostrophes stay inside words)
tokenizer = nltk.tokenize.RegexpTokenizer(r"[^\w']+", gaps=True)
# tokenize
tokens = tokenizer.tokenize(text)
# remove stopwords (a set makes the membership test cheap)
stopwords = set(nltk.corpus.stopwords.words('english'))
tokens = [w for w in tokens if w not in stopwords]
I'm aware of the BigramCollocationFinder and TrigramCollocationFinder, which do exactly what I'm looking for in those two cases; a rough sketch of what I have in mind with them follows below.
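For reference, something along these lines is what I mean for the bigram case (the PMI measure and the cutoff of 10 are just placeholder choices; the trigram version is analogous):

# score bigram collocations over the stopword-filtered tokens
bigram_measures = nltk.collocations.BigramAssocMeasures()
finder = nltk.collocations.BigramCollocationFinder.from_words(tokens)
top_bigrams = finder.nbest(bigram_measures.pmi, 10)  # 10 is an arbitrary cutoff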
The Question
I'm looking for advice on handling n-grams of higher order, on improving the kinds of results that come from BCF and TCF, and on the best way to identify the most unique individual key words.
Many thanks!
As for the best way to identify the most unique individual key words, tf-idf is the standard measure. To calculate it efficiently and on the fly, you would have to integrate a search engine somehow, or build a simple custom inverted index that is dynamic and keeps track of term frequencies and document frequencies.
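A minimal sketch of that inverted-index idea (the class and method names here are hypothetical, not from any library):

import math
from collections import Counter, defaultdict

class TfidfIndex:
    def __init__(self):
        self.doc_term_counts = {}          # doc_id -> Counter of term frequencies
        self.doc_freq = defaultdict(int)   # term -> number of documents containing it

    def add_document(self, doc_id, tokens):
        counts = Counter(tokens)
        self.doc_term_counts[doc_id] = counts
        for term in counts:
            self.doc_freq[term] += 1

    def tfidf(self, term, doc_id):
        counts = self.doc_term_counts[doc_id]
        tf = counts[term] / sum(counts.values())
        # +1 in the denominator avoids division by zero for unseen terms
        idf = math.log(len(self.doc_term_counts) / (1 + self.doc_freq[term]))
        return tf * idf

Because documents are added incrementally, the frequencies stay up to date and you can score any term against any document on demand.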
As for your N-grams, why not write a custom parser using a sliding "window" of length N that identifies, say, the most frequent ones? Just keep every N-gram as a key in a dictionary, with the value being either its frequency or a score based on the tf-idf of its individual terms.
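A bare-bones sketch of that window approach, counting raw frequencies (n = 4 and the top-20 cutoff are arbitrary example values):

from collections import Counter

def top_ngrams(tokens, n=4, k=20):
    counts = Counter()
    # slide a window of length n over the token list
    for i in range(len(tokens) - n + 1):
        counts[tuple(tokens[i:i + n])] += 1
    return counts.most_common(k)

Swapping the raw count for a tf-idf-based score of the window's terms is a straightforward change to the line that updates the dictionary.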