Is there a way to find the frequency of the usage of a word in the English language using WordNet or NLTK using Python?
NOTE: I do not want the frequency count of a word in a given input file. I want the frequency count of a word in general, based on present-day usage.
In WordNet, every Lemma has a frequency count that is returned by the method lemma.count(), and which is stored in the file nltk_data/corpora/wordnet/cntlist.rev.
Code example:
from nltk.corpus import wordnet
syns = wordnet.synsets('stack')
for s in syns:
    for l in s.lemmas():
        print l.name + " " + str(l.count())
Result:
stack 2
batch 0
deal 1
flock 1
good_deal 13
great_deal 10
hatful 0
heap 2
lot 13
mass 14
mess 0
...
However, many counts are zero, and neither the source file nor the documentation says which corpus was used to create this data. According to the book Speech and Language Processing by Daniel Jurafsky and James H. Martin, the sense frequencies come from the SemCor corpus, which is a subset of the already small and outdated Brown Corpus.
So it's probably best to choose the corpus that fits your application best and create the data yourself, as Christopher suggested.
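If you go that route, here is a minimal sketch of building your own general word-frequency counts with nltk.FreqDist over the Brown corpus. Brown is just my assumption for illustration; substitute whatever corpus matches your domain, and run nltk.download('brown') first if it is not installed:
import nltk
from nltk.corpus import brown

# Build a frequency distribution over all (lower-cased) tokens in the corpus
freq = nltk.FreqDist(w.lower() for w in brown.words())

print(freq['stack'])       # absolute count of 'stack' in the corpus
print(freq.freq('stack'))  # relative frequency (count / total number of tokens)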
To make this Python 3.x compatible, just do:
Code example:
from nltk.corpus import wordnet
syns = wordnet.synsets('stack')
for s in syns:
    for l in s.lemmas():
        print(l.name() + " " + str(l.count()))