The following code runs a Naive Bayes movie review classifier and generates a list of the most informative features.
Note: the **movie_reviews** corpus folder ships with the NLTK data.
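If the NLTK data isn't installed yet, it can be fetched once with NLTK's standard downloader; the resource names below are the official NLTK data identifiers:
import nltk
nltk.download('movie_reviews')  # the labelled review corpus used below
nltk.download('stopwords')      # English stopword list
nltk.download('punkt')          # tokenizer models used by word_tokenize later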
import string
import nltk
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
stop = stopwords.words('english')
documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]
numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
classifier = NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(5)
The code above is from alvas.
How can I test the classifier on a specific file?
Please let me know if my question is ambiguous or wrong.
First, read these answers carefully; they contain parts of the answer you require and also briefly explain what the classifier does and how it works in NLTK:
Testing classifier on annotated data
Now to answer your question. We assume that your question is a follow-up of this question: Using my own corpus instead of movie_reviews corpus for Classification in NLTK
If your test text is structured the same way as the movie_reviews corpus, then you can simply read the test data as you would the training data:
Just in case the explanation of the code is unclear, here's a walkthrough:
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
The lines above read a directory my_movie_reviews with the following structure:
\my_movie_reviews
    \pos
        123.txt
        234.txt
    \neg
        456.txt
        789.txt
    README
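To sanity-check that the reader picked up both categories from the directory names, you can inspect it; categories() and fileids() are standard CategorizedPlaintextCorpusReader methods, and the output shown assumes the structure above:
print(mr.categories())    # ['neg', 'pos']
print(mr.fileids('pos'))  # ['pos/123.txt', 'pos/234.txt']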
The next line extracts each document together with its pos/neg tag, which comes from the directory path.
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
Here's the explanation for the above line:
# This extracts the pos/neg tag
labels = [i.split('/')[0] for i in mr.fileids()]
# Reads the words from the corpus through the CategorizedPlaintextCorpusReader object
words = [w for w in mr.words(i)]
# Removes the stopwords
words = [w for w in mr.words(i) if w.lower() not in stop]
# Removes the punctuation
words = [w for w in mr.words(i) if w not in string.punctuation]
# Removes the stopwords and punctuations
words = [w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation]
# Removes the stopwords and punctuations and put them in a tuple with the pos/neg labels
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
The SAME process should be applied when you read the test data!
Now to the feature processing:
The following lines extract the top 100 words as features for the classifier:
# Extract the words features and put them into FreqDist
# object which records the no. of times each unique word occurs
word_features = FreqDist(chain(*[i for i,j in documents]))
# Cuts the FreqDist to the top 100 words in terms of their counts.
word_features = [word for word, count in word_features.most_common(100)]
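In NLTK 3, FreqDist is a subclass of collections.Counter, so .keys() is not guaranteed to be sorted by count and cannot be sliced directly under Python 3; most_common is the reliable way to take the most frequent words. A minimal illustration:
fd = FreqDist(['good', 'bad', 'good'])
print(fd.most_common(1))  # [('good', 2)]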
Next, process the documents into a classifiable format:
# Splits the training data into training size and testing size
numtrain = int(len(documents) * 90 / 100)
# Process the documents for training data
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
# Process the documents for testing data
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
Now to explain that long list comprehension for train_set and test_set:
# Take the first `numtrain` no. of documents
# as training documents
train_docs = documents[:numtrain]
# Takes the rest of the documents as test documents.
test_docs = documents[numtrain:]
# These extract the feature sets for the classifier
# please look at the full explanation on https://stackoverflow.com/questions/20827741/nltk-naivebayesclassifier-training-for-sentiment-analysis/
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in train_docs]
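A side note on efficiency (an optional tweak, not part of the original answer): i in tokens scans a Python list, so for long reviews it is much faster to convert each document's tokens to a set first:
train_set = []
for tokens, tag in train_docs:
    token_set = set(tokens)  # set membership is O(1); list membership is O(n)
    train_set.append(({w: (w in token_set) for w in word_features}, tag))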
You need to process the documents the same way for the feature extraction on the test documents too!
So here's how you can read the test data:
import string
from nltk.corpus import stopwords
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

stop = stopwords.words('english')
# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts training data into tuples of [(words,label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
# Now do the same for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words,label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
Then continue with the processing steps described above, and simply do this to get the label for the test document as @yvespeirsman answered:
#### FOR TRAINING DATA ####
import string
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus.reader import CategorizedPlaintextCorpusReader

stop = stopwords.words('english')
# Reads the training data.
traindir = '/home/alvas/my_movie_reviews'
mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts training data into tuples of [(words,label), ...]
documents = [([w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
# Extract training features.
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]
# Assuming that you're using full data set
# since your test set is different.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
#### TRAIN THE CLASSIFIER ####
classifier = NaiveBayesClassifier.train(train_set)
#### FOR TESTING DATA ####
# Now do the same reading and processing for the testing data.
testdir = '/home/alvas/test_reviews'
mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
# Converts testing data into tuples of [(words,label), ...]
test_documents = [([w for w in mr_test.words(i) if w.lower() not in stop and w not in string.punctuation], i.split('/')[0]) for i in mr_test.fileids()]
# Reads test data into features:
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in test_documents]
#### Evaluate the classifier ####
for doc, gold_label in test_set:
    tagged_label = classifier.classify(doc)
    if tagged_label == gold_label:
        print("Woohoo, correct")
    else:
        print("Boohoo, wrong")
If the above code and explanation make no sense to you, then you MUST read this tutorial before proceeding: http://www.nltk.org/howto/classify.html
Now let's say you have no annotation in your test data, i.e. your test.txt is not in a directory structure like the movie_reviews corpus but is just a plain text file:
\test_movie_reviews
    \1.txt
    \2.txt
Then there's no point in reading it into a categorized corpus; you can simply read and tag the documents, e.g.:
import os
from nltk import word_tokenize

for infile in os.listdir('test_movie_reviews'):
    for line in open(os.path.join('test_movie_reviews', infile), 'r'):
        doc = word_tokenize(line.lower())
        tagged_label = classifier.classify({i: (i in doc) for i in word_features})
BUT you CANNOT evaluate the results without annotations, so there is no gold label to compare the predicted tag against in an if-else. Note also that you need to tokenize the text yourself (as with word_tokenize above), since you're not using the CategorizedPlaintextCorpusReader.
If you just want to tag a plaintext file test.txt
:
import string
from itertools import chain
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import movie_reviews
from nltk import word_tokenize
stop = stopwords.words('english')
# Extracts the documents.
documents = [([w for w in movie_reviews.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in movie_reviews.fileids()]
# Extract the features.
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = [word for word, count in word_features.most_common(100)]
# Converts documents to features.
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents]
# Train the classifier.
classifier = NaiveBayesClassifier.train(train_set)
# Tag the test file.
with open('test.txt', 'r') as fin:
    for test_sentence in fin:
        # Tokenize the line.
        doc = word_tokenize(test_sentence.lower())
        featurized_doc = {i:(i in doc) for i in word_features}
        tagged_label = classifier.classify(featurized_doc)
        print(tagged_label)
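To also see how confident the classifier is for each line, you can, inside the same loop, ask for the probability distribution over the labels; prob_classify, .max(), and .prob() are standard NLTK classifier APIs:
dist = classifier.prob_classify(featurized_doc)
print(dist.max(), dist.prob('pos'), dist.prob('neg'))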
Once again, please don't just copy and paste the solution; try to understand why and how it works.