 

What's a good measure for classifying text documents?

I have written an application that measures text importance. It takes a text article, splits it into words, drops stopwords, performs stemming, and counts word-frequency and document-frequency. Word-frequency counts how many times a given word appears across all documents, and document-frequency counts how many documents a given word appears in.

Here's an example with two text articles:

  • Article I) "A fox jumps over another fox."
  • Article II) "A hunter saw a fox."

Article I gets split into words (after stemming and dropping stopwords):

  • ["fox", "jump", "another", "fox"].

Article II gets split into words:

  • ["hunter", "see", "fox"].

These two articles produce the following word-frequency and document-frequency counters (a sketch of the counting step follows the list):

  • fox (word-frequency: 3, document-frequency: 2)
  • jump (word-frequency: 1, document-frequency: 1)
  • another (word-frequency: 1, document-frequency: 1)
  • hunter (word-frequency: 1, document-frequency: 1)
  • see (word-frequency: 1, document-frequency: 1)
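
In case it helps, here is a simplified sketch of that counting step in Python (not my actual application code; tokenization, stemming, and stopword removal are assumed to have already happened):

    from collections import Counter

    # Articles are assumed to be already tokenized, stemmed, and stopword-filtered.
    articles = [
        ["fox", "jump", "another", "fox"],   # Article I
        ["hunter", "see", "fox"],            # Article II
    ]

    word_freq = Counter()   # total occurrences of each word across all articles
    doc_freq = Counter()    # number of articles each word appears in

    for tokens in articles:
        word_freq.update(tokens)
        doc_freq.update(set(tokens))   # count each word at most once per article

    print(word_freq)   # fox: 3, all other words: 1
    print(doc_freq)    # fox: 2, all other words: 1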

Given a new text article, how do I measure how similar this article is to previous articles?

I've read about the tf-idf measure, but it doesn't seem to apply here, as I'm dropping stopwords, so words like "a" and "the" don't appear in the counters.

For example, if a new text article says "hunters love foxes", how do I come up with a measure that says this article is pretty similar to ones previously seen?

As another example, if a new text article says "deer are funny", then it is a totally new article and its similarity should be 0.

I imagine I somehow need to sum the word-frequency and document-frequency counter values, but what's a good formula to use?

asked May 10 '18 by bodacydo


3 Answers

A standard solution is to apply the Naive Bayes classifier, which estimates the posterior probability of a class C given a document D, denoted P(C=k|D) (for a binary classification problem, k = 0 or 1).

These probabilities are estimated from a training set of class-labeled documents, where for each document D we know its class C. By Bayes' rule, the posterior is proportional to the document likelihood times the class prior:

P(C|D) ∝ P(D|C) * P(C)              (1)

Naive Bayes assumes that terms are independent, in which case you can write P(D|C) as

P(D|C) = \prod_{t \in D} P(t|C)     (2)

P(t|C) can simply be estimated by counting how often a term occurs in documents of a given class, e.g. you expect the word football to occur a large number of times in documents belonging to the class (category) sports.

As for the other factor, the prior P(C), you can estimate it by counting how many labeled documents you have from each class; maybe you have more sports articles than finance ones, which makes an unseen document more likely a priori to be classified into the sports category.

It is very easy to incorporate additional factors, such as term importance (idf) or term dependence, into Equation (1). For idf, you add it as a term-sampling event from the collection (irrespective of the class). For term dependence, you have to plug in probabilities of the form P(u|C)*P(u|t), which means that you sample a different term u and change (transform) it to t.

Standard implementations of the Naive Bayes classifier can be found in the Stanford NLP package, Weka, and scikit-learn, among many others.
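
For illustration, here is a minimal scikit-learn sketch (the corpus and the class labels are made up; MultinomialNB estimates P(t|C) from the term counts and the class priors from the label frequencies):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Made-up labeled corpus, purely for illustration.
    train_docs = ["a fox jumps over another fox",
                  "a hunter saw a fox",
                  "stocks fell sharply on monday",
                  "the market rallied after the earnings report"]
    train_labels = ["wildlife", "wildlife", "finance", "finance"]

    vectorizer = CountVectorizer(stop_words="english")   # raw term counts, stopwords dropped
    X_train = vectorizer.fit_transform(train_docs)

    clf = MultinomialNB()   # learns P(t|C) and the priors P(C) from the counts
    clf.fit(X_train, train_labels)

    X_new = vectorizer.transform(["the hunter saw another fox"])
    print(clf.predict(X_new))         # predicted class for the new document
    print(clf.predict_proba(X_new))   # posterior P(C|D) for each class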

answered Sep 18 '22 by Debasis


It seems that you are trying to answer several related questions:

  1. How to measure similarity between documents A and B? (Metric learning)
  2. How to measure how unusual document C is, compared to some collection of documents? (Anomaly detection)
  3. How to split a collection of documents into groups of similar ones? (Clustering)
  4. How to predict to which class a document belongs? (Classification)

All of these problems are normally solved in 2 steps:

  1. Extract the features: Document --> Representation (usually a numeric vector)
  2. Apply the model: Representation --> Result (usually a single number)

There are lots of options for both feature engineering and modeling. Here are just a few.

Feature extraction

  1. Bag of words: Document --> number of occurrences of each individual word (that is, term frequencies). This is the basic option, but not the only one (a short sketch of options 1 and 2 follows this list).
  2. Bag of n-grams (on word-level or character-level): co-occurrence of several tokens is taken into account.
  3. Bag of words + grammatical features (e.g. POS tags)
  4. Bag of word embeddings (learned by an external model, e.g. word2vec). You can use the embeddings as a sequence or take their weighted average.
  5. Whatever you can invent (e.g. rules based on dictionary lookup)...
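
As an illustration of options 1 and 2, here is a sketch using scikit-learn's CountVectorizer on the two toy articles from the question (any other vectorizer would do just as well):

    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["A fox jumps over another fox.", "A hunter saw a fox."]

    # Option 1: bag of words -- one column per term, values are term frequencies.
    bow = CountVectorizer()
    X_bow = bow.fit_transform(docs)
    print(bow.get_feature_names_out())   # ['another' 'fox' 'hunter' 'jumps' 'over' 'saw']
    print(X_bow.toarray())               # row 1: [1 2 0 1 1 0], row 2: [0 1 1 0 0 1]

    # Option 2: word-level unigrams plus bigrams -- adjacent-token co-occurrence is captured.
    bi = CountVectorizer(ngram_range=(1, 2))
    X_bi = bi.fit_transform(docs)
    print(bi.get_feature_names_out())    # now includes entries like 'fox jumps', 'hunter saw'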

Features may be preprocessed in order to decrease the relative amount of noise in them. Some options for preprocessing are listed below (a short sketch follows the list):

  • weighting by IDF (equivalently, downweighting by document frequency), if you don't have a hard list of stop words or believe that words might be more or less "stoppy"
  • normalizing each column (e.g. word count) to have zero mean and unit variance
  • taking logs of word counts to reduce noise
  • normalizing each row to have L2 norm equal to 1
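
Several of these options are available directly in scikit-learn's TfidfVectorizer, for example (a sketch; the column-wise standardization would be a separate step):

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["A fox jumps over another fox.", "A hunter saw a fox."]

    # use_idf=True      -> weight terms by IDF, downweighting words that occur in many documents
    # sublinear_tf=True -> replace raw counts with 1 + log(count) to damp large counts
    # norm="l2"         -> rescale every row (document vector) to unit L2 norm
    vec = TfidfVectorizer(use_idf=True, sublinear_tf=True, norm="l2")
    X = vec.fit_transform(docs)
    print(X.toarray())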

You cannot know in advance which option(s) will be best for your specific application; you have to experiment.

Now you can build the ML model. Each of the 4 problems has its own good solutions.

For classification, the best studied problem, you can use multiple kinds of models, including Naive Bayes, k-nearest-neighbors, logistic regression, SVM, decision trees and neural networks. Again, you cannot know in advance which would perform best.

Most of these models can use almost any kind of features. However, KNN and kernel-based SVM require your features to have a special structure: representations of documents of one class should be close to each other in the sense of the Euclidean distance metric. This can sometimes be achieved by simple linear and/or logarithmic normalization (see above). More difficult cases require non-linear transformations, which in principle may be learned by neural networks. Learning these transformations is what people call metric learning, and in general it is a problem which is not yet fully solved.
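
For example, a sketch of KNN on L2-normalized TF-IDF vectors (the corpus and labels are made up; the default L2 normalization is what gives KNN the Euclidean structure it needs):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import KNeighborsClassifier

    train_docs = ["a fox jumps over another fox",
                  "a hunter saw a fox",
                  "stocks fell sharply on monday",
                  "the market rallied after the earnings report"]
    train_labels = ["wildlife", "wildlife", "finance", "finance"]

    vec = TfidfVectorizer()              # rows are L2-normalized by default
    X_train = vec.fit_transform(train_docs)

    knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean")
    knn.fit(X_train, train_labels)
    print(knn.predict(vec.transform(["the hunter saw another fox"])))   # ['wildlife']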

The most conventional distance metric is indeed Euclidean. However, other distance metrics are possible (e.g. Manhattan distance), as are different approaches not based on vector representations of texts. For example, you can calculate the Levenshtein distance between texts, based on the number of edit operations needed to transform one text into another. Or you can calculate the "word mover's distance" - the sum of distances between word pairs with the closest embeddings.

For clustering, the basic options are K-means and DBSCAN. Both of these models require that your feature space have this Euclidean property.
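
A sketch of K-means on the same kind of TF-IDF features (toy documents; the exact cluster labels depend on initialization, hence the fixed random_state):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = ["a fox jumps over another fox",
            "a hunter saw a fox",
            "stocks fell sharply on monday",
            "the market rallied after the earnings report"]

    X = TfidfVectorizer().fit_transform(docs)       # L2-normalized rows by default
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)   # e.g. [0 0 1 1] -- the two fox articles land in the same cluster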

For anomaly detection you can use density estimates, which are produced by various probabilistic algorithms: classification (e.g. naive Bayes or neural networks), clustering (e.g. mixtures of Gaussians), or other unsupervised methods (e.g. probabilistic PCA). For texts, you can exploit the sequential language structure, estimating the probability of each word conditional on the previous words (using n-grams or convolutional/recurrent neural nets) - these are called language models, and they are usually more effective than the bag-of-words assumption of Naive Bayes, which ignores word order. Several language models (one for each class) may be combined into one classifier.
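
As a toy illustration of the density-estimation idea (a single smoothed unigram model over the whole collection, rather than a full per-class n-gram language model):

    import math
    from collections import Counter

    # Previously seen articles, already tokenized and stemmed as in the question.
    seen = [["fox", "jump", "another", "fox"], ["hunter", "see", "fox"]]

    counts = Counter(w for doc in seen for w in doc)
    vocab_size = len(counts)
    total = sum(counts.values())

    def avg_log_prob(tokens, alpha=1.0):
        # Average per-token log-probability under an add-alpha-smoothed unigram model.
        # Lower values indicate more unusual (out-of-distribution) documents.
        denom = total + alpha * (vocab_size + 1)   # one extra slot for unseen words
        return sum(math.log((counts.get(t, 0) + alpha) / denom) for t in tokens) / len(tokens)

    print(avg_log_prob(["hunter", "love", "fox"]))   # higher -> looks familiar
    print(avg_log_prob(["deer", "be", "funny"]))     # lower  -> looks anomalous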

Whatever problem you solve, it is strongly recommended to have a good test set with the known "ground truth": which documents are close to each other, or belong to the same class, or are (un)usual. With this set, you can evaluate different approaches to feature engineering and modelling, and choose the best one.

If you don't have the resources or willingness to do multiple experiments, I would recommend choosing one of the following approaches to evaluate similarity between texts:

  • word counts + idf normalization + L2 normalization (equivalent to the solution of @mcoav) + Euclidean distance
  • mean word2vec embedding over all words in text (the embedding dictionary may be googled up and downloaded) + Euclidean distance

Based on one of these representations, you can build models for the other problems - e.g. KNN for classification or k-means for clustering.
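
A sketch of the first option (scikit-learn's TfidfVectorizer applies IDF weighting and L2 row normalization by default; the new article below is made up):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import euclidean_distances

    known = ["a fox jumps over another fox", "a hunter saw a fox"]
    vec = TfidfVectorizer()               # word counts + IDF weighting + L2 normalization
    X_known = vec.fit_transform(known)

    X_new = vec.transform(["the hunter saw another fox"])
    print(euclidean_distances(X_new, X_known))   # smaller distance = more similar article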

answered Sep 18 '22 by David Dale


I would suggest tf-idf and cosine similarity.

You can still use tf-idf if you drop stop-words. It is even likely that including stop-words or not would not make much of a difference: the Inverse Document Frequency measure automatically downweights stop-words, since they are very frequent and appear in most documents.

If your new document is entirely made of unknown terms, the cosine similarity will be 0 with every known document.
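
For illustration, a minimal sketch with scikit-learn (no stemming here, so the first test article reuses exact terms from the known documents; with stemming, "hunters love foxes" would match as well):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    known = ["a fox jumps over another fox", "a hunter saw a fox"]
    vec = TfidfVectorizer(stop_words="english")
    X_known = vec.fit_transform(known)

    for new_doc in ["the hunter saw a fox", "deer are funny"]:
        sims = cosine_similarity(vec.transform([new_doc]), X_known)
        print(new_doc, "->", sims.round(2))

    # "deer are funny" contains only unknown terms, so both of its similarities are 0.0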

answered Sep 19 '22 by mcoav