
How to engineer features for machine learning [closed]

Do you have any advice or reading recommendations on how to engineer features for a machine learning task? Good input features are important even for a neural network. The chosen features will affect the number of hidden neurons and the number of training examples needed.

The following is an example problem, but I'm interested in feature engineering in general.

A motivating example: what would be a good input representation when looking at a puzzle (e.g., the 15-puzzle or Sokoban)? Would it be possible to recognize which of two states is closer to the goal?

asked Apr 20 '10 by Ivo Danihelka


1 Answer

Good feature engineering involves two components. The first is an understanding of the properties of the task you're trying to solve and how they might interact with the strengths and limitations of the classifier you're using. The second is experimental work, where you test your expectations and find out what actually works and what doesn't.

This can be done iteratively: your top-down understanding of the problem motivates experiments, and the bottom-up information you learn from those experiments helps you build a better understanding of the problem. That deeper understanding can then drive further experiments.

Fitting Features to Your Classifier

Let's say you're using a simple linear classifier like logistic regression or an SVM with a linear kernel. If you think there might be interesting interactions between the various attributes you can measure and provide as input to the classifier, you'll need to manually construct and provide features that capture those interactions. However, if you're using an SVM with a polynomial or Gaussian kernel, interactions between the input variables will already be captured by the structure of the model.
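
As a minimal sketch (assuming scikit-learn, which the answer doesn't name explicitly), here is the contrast on a toy XOR-style problem, where the label depends entirely on the interaction of the two inputs: the linear model only works once the interaction feature is added by hand, while the kernel SVM captures it implicitly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVC

# Toy XOR-like data: the label depends on the *interaction* of the two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 25, dtype=float)
y = X[:, 0].astype(int) ^ X[:, 1].astype(int)

# Linear classifier: construct the x1*x2 interaction feature yourself.
X_inter = PolynomialFeatures(degree=2, interaction_only=True,
                             include_bias=False).fit_transform(X)
linear_clf = LogisticRegression().fit(X_inter, y)

# Kernel SVM: the Gaussian (RBF) kernel captures the interaction implicitly,
# so the raw two-column input is enough.
kernel_clf = SVC(kernel='rbf').fit(X, y)
```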

Similarly, SVMs can perform poorly if some input variables take on a much larger range of values than others (e.g., most features take on a value of 0 or 1, but one feature takes on values between -1000 and 1000). So, when you're doing feature engineering for an SVM, you might want to try normalizing the values of your features before providing them to the classifier. However, if you're using decision trees or random forests, such normalization isn't necessary, as these classifiers are robust to differences in magnitude between the values that various features take on.
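
A sketch of that normalization point, again assuming scikit-learn and made-up data: the SVM gets a standardization step in front of it, while the random forest is trained on the raw values because tree-based models are insensitive to feature scale.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 2, size=(200, 5)),     # mostly 0/1 features
                     rng.uniform(-1000, 1000, size=200)])   # one wide-ranging feature
y = rng.integers(0, 2, size=200)

# SVM: standardize each feature to zero mean and unit variance first.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel='rbf')).fit(X, y)

# Random forest: no scaling needed; splits depend only on feature orderings.
forest_clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```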

Notes Specifically on Puzzle Solving

If you're looking at solving a problem with a complex state space, you might want to use a reinforcement learning approach like Q-learning. Reinforcement learning is well suited to tasks where the system must reach a goal through a series of intermediate steps.
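
Below is a minimal sketch of tabular Q-learning. The environment helpers (initial_state, legal_moves, apply_move, is_goal) are hypothetical placeholders you would implement for your specific puzzle (e.g., the 15-puzzle), with states represented as hashable values such as tuples.

```python
import random
from collections import defaultdict

def q_learning(initial_state, legal_moves, apply_move, is_goal,
               episodes=1000, alpha=0.1, gamma=0.95, epsilon=0.1, max_steps=200):
    Q = defaultdict(float)  # maps (state, move) -> estimated long-term value
    for _ in range(episodes):
        state = initial_state()
        for _ in range(max_steps):
            moves = legal_moves(state)
            if random.random() < epsilon:                 # explore
                move = random.choice(moves)
            else:                                         # exploit current estimates
                move = max(moves, key=lambda m: Q[(state, m)])
            next_state = apply_move(state, move)
            reward = 1.0 if is_goal(next_state) else -0.01  # small penalty per step
            best_next = max((Q[(next_state, m)] for m in legal_moves(next_state)),
                            default=0.0)
            # Standard Q-learning update rule
            Q[(state, move)] += alpha * (reward + gamma * best_next - Q[(state, move)])
            if is_goal(next_state):
                break
            state = next_state
    return Q
```

A learned Q-table like this also hints at an answer to the original question: comparing max-Q values of two states is one way to judge which state is closer to the goal.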

answered Oct 04 '22 by dmcer