
How to include words as numerical features in classification

What's the best method for using the words themselves as features in a machine learning algorithm?

The problem: I have to extract word-related features from a particular paragraph. Should I use the word's index in the dictionary as the numerical feature? If so, how would I normalize these?

In general, how are words themselves used as features in NLP?

AlgoMan asked Nov 17 '10


2 Answers

There are several conventional techniques by which words are mapped to features (columns in a 2D data matrix in which the rows are the individual data vectors) for input to machine learning models for classification; a short code sketch of each appears after the list:

  • a Boolean field which encodes the presence or absence of that word in a given document;

  • a frequency histogram of a predetermined set of words, often the X most commonly occurring words across all documents in the training data (more on this one in the last paragraph of this answer);

  • the juxtaposition of two or more words (e.g., 'alternative' and 'lifestyle' in consecutive order have a meaning not related to either component word); this juxtaposition can either be captured in the data model itself, e.g., a Boolean feature that represents the presence or absence of two particular words directly adjacent to one another in a document, or this relationship can be exploited by the ML technique, as a naive Bayes classifier would do in this instance;

  • words as raw data from which to extract latent features, e.g., LSA, or Latent Semantic Analysis (also sometimes called LSI, for Latent Semantic Indexing). LSA is a matrix-decomposition technique which derives latent variables from the text that are not apparent from the words themselves.
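Each of these encodings takes only a few lines of code in practice. Below is a minimal sketch, assuming scikit-learn (my choice of library, not something the answer prescribes) and a toy three-document corpus invented purely for illustration:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = [
        "the cat sat on the mat",
        "an alternative lifestyle in the city",
        "the dog chased the cat",
    ]

    # 1) Boolean presence/absence of each word in each document.
    bool_vec = CountVectorizer(binary=True)
    X_bool = bool_vec.fit_transform(docs)        # shape: (3 docs, vocab size)

    # 2) Frequency histogram over the X most common words in the corpus.
    freq_vec = CountVectorizer(max_features=5)   # keep only the 5 most frequent words
    X_freq = freq_vec.fit_transform(docs)

    # 3) Juxtaposition of consecutive words as bigram features, so that
    #    "alternative lifestyle" becomes a single column of its own.
    bigram_vec = CountVectorizer(ngram_range=(2, 2), binary=True)
    X_bigram = bigram_vec.fit_transform(docs)

    # 4) Latent features via LSA: a truncated SVD of the document-term matrix.
    lsa = TruncatedSVD(n_components=2)
    X_latent = lsa.fit_transform(X_bool)         # shape: (3 docs, 2 latent dims)

    print(bool_vec.get_feature_names_out())
    print(bigram_vec.get_feature_names_out())

The resulting matrices (X_bool, X_freq, X_bigram, X_latent) can each be fed directly to a classifier; which representation works best depends on the task.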

A common reference data set in machine learning consists of the frequencies of 50 or so of the most common words, aka "stop words" (e.g., a, an, of, and, the, there, if), in published works of Shakespeare, London, Austen, and Milton. A basic multi-layer perceptron with a single hidden layer can separate this data set with 100% accuracy. This data set and variations on it are widely available in ML data repositories, and academic papers presenting classification results on it are likewise common.
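For concreteness, here is a hypothetical sketch of that setup, again assuming scikit-learn; the data below is random stand-in data, not the actual Shakespeare/London/Austen/Milton frequencies, so it will not show the 100% separation described above:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 50))          # 200 documents x 50 stop-word frequencies
    y = rng.integers(0, 4, size=200)   # 4 authors (stand-in labels)

    # One hidden layer, as in the perceptron described above.
    clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))

With the real corpus, the rows would be per-document relative frequencies of the chosen stop words and the labels would be the four authors.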

doug answered Nov 15 '22


The standard approach is the "bag-of-words" representation, where you have one feature per word: "1" if the word occurs in the document and "0" if it doesn't.

This gives lots of features, but if you have a simple learner like Naive Bayes, that's still OK.
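A minimal sketch of that bag-of-words plus Naive Bayes combination, assuming scikit-learn (the documents and labels here are made up for illustration):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.pipeline import make_pipeline

    docs = ["great movie", "terrible plot", "great acting, great plot", "terrible movie"]
    labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative (invented)

    # binary=True gives the 0/1 bag-of-words features described above;
    # BernoulliNB is the Naive Bayes variant for binary features.
    model = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
    model.fit(docs, labels)
    print(model.predict(["great plot"]))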

"Index in the dictionary" is a useless feature, I wouldn't use it.

Yaroslav Bulatov answered Nov 15 '22