 

What to do first: Feature Selection or Model Parameters Setting?

This is more of a theoretical question. I'm working with the scikit-learn package to perform some NLP tasks. Sklearn provides many methods for both feature selection and the setting of a model's parameters. I'm wondering which I should do first.

If I use univariate feature selection, it's pretty obvious that I should do feature selection first and then, with the selected features, tune the parameters of the estimator.

But what if I want to use recursive feature elimination (RFE)? Should I first set the parameters with grid search using ALL the original features and only then perform feature selection? Or should I select the features first (with the estimator's default parameters) and then set the parameters using the selected features?
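A minimal sketch (not in the original question) of the two orderings being asked about, assuming a recent scikit-learn module layout; the dataset, estimator, and grid values are purely illustrative:

```python
# Illustrative sketch of both orderings; dataset, estimator and grids are assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=50, n_informative=10,
                           random_state=0)

# Option A: tune the parameters on ALL the original features first,
# then run RFE with the tuned estimator.
search_a = GridSearchCV(LinearSVC(max_iter=5000), {"C": [0.1, 1.0, 10.0]}, cv=5)
search_a.fit(X, y)
rfe_a = RFE(estimator=search_a.best_estimator_, n_features_to_select=10).fit(X, y)

# Option B: run RFE with the estimator's default parameters first,
# then tune the parameters on the selected features only.
rfe_b = RFE(estimator=LinearSVC(max_iter=5000), n_features_to_select=10).fit(X, y)
X_selected = rfe_b.transform(X)
search_b = GridSearchCV(LinearSVC(max_iter=5000), {"C": [0.1, 1.0, 10.0]}, cv=5)
search_b.fit(X_selected, y)
```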

EDIT

I'm having pretty much the same problem as the one stated here. At that time, there wasn't a solution to it. Does anyone know if one exists now?

asked Sep 17 '12 by feralvam



1 Answer

Personally, I think RFE is overkill and too expensive in most cases. If you want to do feature selection on linear models, use univariate feature selection, for instance with chi2 tests, or L1 (or L1 + L2) regularized models with a grid-searched regularization parameter (usually named C or alpha in sklearn models).
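A minimal sketch of that suggestion (not part of the original answer, and using the current scikit-learn module layout): chi2-based univariate selection followed by a linear model in a pipeline, with the number of kept features and the regularization parameter C grid-searched together. The dataset and grid values are illustrative.

```python
# Illustrative sketch: univariate chi2 selection + grid-searched C in one pipeline.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

pipe = Pipeline([
    ("vect", CountVectorizer()),
    ("select", SelectKBest(chi2)),          # univariate feature selection
    ("clf", LogisticRegression(max_iter=1000)),
])

# Search the number of kept features and C jointly, so the selection step
# and the model parameters are tuned by the same cross-validation.
param_grid = {
    "select__k": [100, 500, 1000],
    "clf__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(data.data, data.target)
print(search.best_params_)
```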

For highly non-linear problems with a lot of samples you should try RandomForestClassifier, ExtraTreesClassifier or GBRT models with grid-searched parameter selection (possibly using OOB score estimates), and use the compute_importances switch to obtain a ranking of features by importance, then use that ranking for feature selection.
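A hedged sketch of that workflow with a current scikit-learn, which is not part of the original answer: the old compute_importances switch no longer exists and importances are exposed directly as feature_importances_, so one can pick the forest parameters by OOB score and then keep the top-ranked features with SelectFromModel. The dataset, grid, and threshold are illustrative.

```python
# Illustrative sketch: forest parameter selection via OOB score, then
# importance-based feature selection.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=2000, n_features=100, n_informative=15,
                           random_state=0)

# Pick max_features by comparing OOB score estimates (a simple loop stands in
# for the grid search, since OOB replaces a separate validation split).
best_score, best_forest = -1.0, None
for max_features in ["sqrt", 0.3, 0.5]:
    forest = RandomForestClassifier(n_estimators=300, max_features=max_features,
                                    oob_score=True, random_state=0).fit(X, y)
    if forest.oob_score_ > best_score:
        best_score, best_forest = forest.oob_score_, forest

# Rank features by importance and keep the most informative half.
selector = SelectFromModel(best_forest, threshold="median", prefit=True)
X_selected = selector.transform(X)
print(best_forest.feature_importances_.argsort()[::-1][:10])  # top 10 features
print(X_selected.shape)
```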

For highly non-linear problems with few samples, I don't think there is a solution. You must be doing neuroscience :)

answered Oct 10 '22 by ogrisel