How to calculate TF*IDF for a single new document to be classified?

I am using document-term vectors to represent a collection of documents. I use TF*IDF to calculate the term weights for each document vector, and then I can use this matrix to train a model for document classification.

I would like to classify new documents in the future. But in order to classify a new document, I first need to turn it into a document-term vector, and that vector should be composed of TF*IDF values, too.

My question is, how could I calculate the TF*IDF with just a single document?

As far as I understand, TF can be calculated from a single document by itself, but IDF can only be calculated from a collection of documents. In my current experiment, I actually calculate the TF*IDF values for the whole collection of documents, and then use some documents as the training set and the others as the test set.
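
To make the distinction concrete, here is a minimal from-scratch sketch (plain Python; the toy corpus and token lists are made up for illustration) showing that TF needs only one document while IDF needs the whole collection:

    import math

    def tf(term, doc_tokens):
        # Term frequency: occurrences of the term divided by document length.
        return doc_tokens.count(term) / len(doc_tokens)

    def idf(term, corpus):
        # Inverse document frequency: needs the whole collection, not one doc.
        df = sum(1 for doc in corpus if term in doc)
        return math.log(len(corpus) / df)  # assumes the term occurs somewhere

    corpus = [["the", "cat", "sat"], ["the", "dog", "ran"], ["a", "cat", "slept"]]
    print(tf("cat", corpus[0]) * idf("cat", corpus))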

I just realized that this approach does not seem very applicable to real life.

ADD 1

So there are actually 2 subtly different scenarios for classification:

  1. to classify documents whose contents are known but whose labels are not;
  2. to classify totally unseen documents.

For scenario 1, we can combine all the documents, both with and without labels, and compute TF*IDF over all of them. This way, even if we only use the labeled documents for training, the training result will still reflect the influence of the unlabeled documents.
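
A sketch of scenario 1, assuming scikit-learn's TfidfVectorizer and LogisticRegression (the document lists and labels are hypothetical placeholders):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    labeled_docs = ["spam spam offer", "meeting at noon"]
    labels = [1, 0]
    unlabeled_docs = ["free offer today", "lunch meeting moved"]

    vectorizer = TfidfVectorizer()
    # Fit the IDF statistics on *all* documents, labeled and unlabeled alike...
    vectorizer.fit(labeled_docs + unlabeled_docs)
    # ...but train the classifier only on the labeled subset.
    X_train = vectorizer.transform(labeled_docs)
    clf = LogisticRegression().fit(X_train, labels)
    # The unlabeled documents can now be classified with consistent weights.
    predictions = clf.predict(vectorizer.transform(unlabeled_docs))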

But my scenario is 2.

Suppose I have the following information for term T from the summary of the training set corpus:

  • document count for T in the training set is n
  • total number of training documents is N

Should I calculate the IDF of T for an unseen document D as below?

IDF(T, D) = log((N+1)/(n+1))
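
For instance, a quick numeric check of this formula with made-up counts:

    import math

    N, n = 1000, 10                      # made-up counts for term T
    idf_T = math.log((N + 1) / (n + 1))  # log(1001/11), about 4.51
    print(idf_T)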

ADD 2

And what if I encounter a term in the new document that didn't show up in the training corpus? How should I calculate its weight in the doc-term vector?

asked Apr 01 '14 by smwikipedia


3 Answers

TF-IDF doesn't make sense for a single document, independent of a corpus. It's fundamentally about emphasizing relatively rare and informative words.

You need to keep corpus summary information in order to compute TF-IDF weights. In particular, you need the document count for each term and the total number of documents.

Whether you want to use summary information from the whole training set and test set for TF-IDF, or from just the training set, is a matter of your problem formulation. If you only need to apply your classification system to documents whose contents you have but whose labels you do not have (this is actually pretty common), then using TF-IDF for the entire corpus is okay. If you want to apply your classification system to entirely unseen documents after you train, then you only want to use the TF-IDF summary information from the training set.
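
A minimal sketch of the unseen-document case, assuming scikit-learn's TfidfVectorizer (the document strings are placeholders):

    from sklearn.feature_extraction.text import TfidfVectorizer

    train_docs = ["the cat sat", "the dog ran", "a cat slept"]
    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(train_docs)  # IDF comes from training only

    # Later, at prediction time: reuse the stored vocabulary and IDF weights.
    # Terms never seen during fit() are simply dropped from the vector.
    X_new = vectorizer.transform(["the cat ran home"])

Note that transform() on new documents reuses the vocabulary and IDF statistics learned during fitting, which is exactly the corpus summary information described above.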

answered Oct 18 '22 by Rob Neuhaus


TF obviously only depends on the new document.

The IDF you compute only on your training corpus.

You can add a slack term to the IDF computation, or adjust it as you suggested. But for a reasonable training set, the constant +1 term will not have much of an effect. AFAICT, in classic document retrieval (think: search), you don't bother to do this. Often, the query document will not become part of your corpus, so why would it be part of the IDF?
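
A from-scratch sketch of this split, assuming you keep the training-set document counts in a simple table (names like doc_freq are hypothetical):

    import math

    N = 1000                                   # total training documents
    doc_freq = {"cat": 50, "dog": 20}          # document counts from training
    idf_table = {t: math.log(N / df) for t, df in doc_freq.items()}

    new_doc = ["the", "cat", "sat"]
    weights = {}
    for term in set(new_doc):
        if term in idf_table:                  # terms unseen in training get no weight
            tf = new_doc.count(term) / len(new_doc)
            weights[term] = tf * idf_table[term]
    print(weights)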

answered Oct 18 '22 by Has QUIT--Anony-Mousse


For unseen words, TF calculation is not a problem, as TF is a document-specific metric. While computing IDF, you can use the smoothed inverse document frequency technique.

IDF = 1 + log(total documents / document frequency of a term)

Here the lower bound for IDF is 1, so if a word is not seen in the training corpus, you can simply assign it this lower bound of 1. Since there is no universally agreed single formula for computing tf-idf, or even idf, your formula for the tf-idf calculation is also reasonable.

Note that in many cases, unseen terms are simply ignored if they don't have much impact on the classification task. Sometimes people replace unseen tokens with a special symbol like UNKNOWN_TOKEN and do their computation with that.
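
A sketch of this smoothed IDF with the unseen-word fallback, assuming a doc_freq table built from the training corpus:

    import math

    def smoothed_idf(term, doc_freq, n_docs):
        df = doc_freq.get(term, 0)
        if df == 0:
            return 1.0                     # unseen in training: use the lower bound
        return 1 + math.log(n_docs / df)

    doc_freq = {"cat": 50, "dog": 20}
    print(smoothed_idf("cat", doc_freq, 1000))    # 1 + log(20), about 4.0
    print(smoothed_idf("zebra", doc_freq, 1000))  # 1.0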

Alternative to TF-IDF: another way of computing the weight of each term of a document is using Maximum Likelihood Estimation. While computing the MLE, you can smooth it using the additive smoothing technique, also known as Laplace smoothing. MLE is used in case you are using generative models like the Naive Bayes algorithm for document classification.
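
A sketch of that alternative, assuming scikit-learn (alpha=1.0 in MultinomialNB corresponds to Laplace/additive smoothing of the count-based estimates):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_docs = ["spam spam offer", "meeting at noon", "free offer now"]
    labels = [1, 0, 1]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(train_docs)
    clf = MultinomialNB(alpha=1.0).fit(X, labels)   # Laplace-smoothed estimates
    print(clf.predict(vectorizer.transform(["offer for a meeting"])))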

answered Oct 18 '22 by Wasi Ahmad