 

Length normalization in a naive Bayes classifier for documents

I'm trying to implement a naive Bayes classifier to classify documents that are essentially sets (as opposed to bags) of features, i.e. each document contains a set of unique features, each of which can appear at most once. For example, you can think of the features as unique keywords for documents.

I've closely followed the Rennie et al. paper at http://www.aaai.org/Papers/ICML/2003/ICML03-081.pdf, but I am running into a problem that doesn't seem to be addressed there. Namely, classifying short documents results in much higher posterior probabilities because they have fewer features, and vice versa for long documents.

This is because the posterior probabilities are defined as (ignoring the denominator):

P(class|document) = P(class) * P(document|class)

which expands to

P(class|document) = P(class) * P(feature1|class) * ... * P(featureK|class)

From that, it's clear that short documents with fewer features will have higher posterior probabilities simply because there are fewer terms to multiply together.

For example, suppose the features "foo", "bar", and "baz" all show up in positive training observations. Then a document with the single feature "foo" will have a higher posterior probability of being classified in the positive class than a document with the features {"foo", "bar", "baz"}. This seems counter-intuitive, but I'm not quite sure how to solve it.
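To make the effect concrete, here is a minimal sketch in Python using made-up probabilities (the prior and the per-feature likelihoods below are purely hypothetical): the raw product for the short document comes out larger simply because fewer factors, each less than one, are multiplied together.

```python
# Hypothetical prior and per-feature likelihoods for the positive class.
p_class = 0.5
p_feature_given_class = {"foo": 0.8, "bar": 0.7, "baz": 0.6}

def raw_posterior(features):
    """Unnormalized P(class) * prod P(feature|class) for the given feature set."""
    score = p_class
    for f in features:
        score *= p_feature_given_class[f]
    return score

print(raw_posterior({"foo"}))                 # ~0.4
print(raw_posterior({"foo", "bar", "baz"}))   # ~0.168 -- smaller, despite matching more positive features
```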

Is there some sort of length normalization that can be done? One idea is to add the size of the document as a feature, but that doesn't seem quite right since results would then be skewed by the size of documents in the training data.

asked Oct 11 '22 by pmc255
1 Answer

This is a good question, but I'm not completely sure there is a problem here. The posterior probability simply gives you the probability of each class given a document. When classifying a document, you only compare posteriors given that same document, so the number of features does not change (since you are not comparing across documents), that is:

P(class1|document) = P(class1) * P(feature1|class1) * ... * P(featureK|class1)
...
P(classN|document) = P(classN) * P(feature1|classN) * ... * P(featureK|classN)

The class with the highest posterior is then chosen as the label for the document. Since the number of features depends on the document and not the class, there should be no need to normalize.
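As a sketch of that comparison in Python (the priors and likelihoods below are placeholders you would estimate from training counts, and the sums are done in log space to avoid underflow):

```python
import math

# Hypothetical class priors and per-feature likelihoods; in practice these
# would be estimated from training counts (with smoothing).
log_prior = {"pos": math.log(0.5), "neg": math.log(0.5)}
log_likelihood = {
    "pos": {"foo": math.log(0.8), "bar": math.log(0.7), "baz": math.log(0.6)},
    "neg": {"foo": math.log(0.2), "bar": math.log(0.3), "baz": math.log(0.4)},
}

def log_score(cls, features):
    # log P(class) + sum of log P(feature|class): the unnormalized log posterior.
    return log_prior[cls] + sum(log_likelihood[cls][f] for f in features)

def classify(features):
    # Every class is scored on the same document, so the number of features
    # is identical across the comparison and cannot favor any class.
    return max(log_prior, key=lambda cls: log_score(cls, features))

print(classify({"foo", "bar", "baz"}))  # -> "pos" with these made-up numbers
```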

Am I missing something? If you want to do more than classify, e.g. compare the most likely documents for a particular class, then you would have to use the actual definition of the posterior probability:

P(class1|document) = [P(class1) * P(feature1|class1) * ... * P(featureK|class1)] / Sum_over_all_classes[P(class) * P(feature1|class) * ... * P(featureK|class)]

And this would normalize correctly across documents of varying feature lengths.
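In code, that normalization just divides each class's unnormalized score by the sum of the scores over all classes. A self-contained sketch, reusing the same hypothetical parameters as above:

```python
import math

# Same hypothetical priors and likelihoods as in the previous sketch.
log_prior = {"pos": math.log(0.5), "neg": math.log(0.5)}
log_likelihood = {
    "pos": {"foo": math.log(0.8), "bar": math.log(0.7), "baz": math.log(0.6)},
    "neg": {"foo": math.log(0.2), "bar": math.log(0.3), "baz": math.log(0.4)},
}

def posterior(features):
    """Proper posterior P(class|document): each class's unnormalized score
    divided by the sum of those scores over all classes."""
    log_scores = {cls: log_prior[cls] + sum(log_likelihood[cls][f] for f in features)
                  for cls in log_prior}
    norm = sum(math.exp(s) for s in log_scores.values())
    return {cls: math.exp(s) / norm for cls, s in log_scores.items()}

print(posterior({"foo"}))                # roughly {'pos': 0.8, 'neg': 0.2}
print(posterior({"foo", "bar", "baz"}))  # sums to 1, so comparable across documents
```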

answered Oct 14 '22 by Junier