Should Feature Selection be done before Train-Test Split or after?

There is a contradiction between two facts, each a possible answer to this question:

  1. The conventional answer is to do it after splitting, since performing feature selection before the split can leak information from the Test-Set.

  2. The contradicting answer is that, if only the Training Set drawn from the whole dataset is used for feature selection, then the selected features (or the ordering of feature importance scores) are likely to change whenever the random_state of the train_test_split changes. If the selected features change from run to run, no generalization of feature importance can be made, which is not desirable. Secondly, if only the Training Set is used for feature selection, the Test Set may contain instances that contradict the selection made on the Training Set alone, since the overall historical data has not been analyzed. Moreover, feature importance scores can only be evaluated over a set of instances, not over a single test/unknown instance.

Asked May 25 '19 by Navoneel Chakrabarty

1 Answer

The conventional answer #1 is correct here; the arguments in the contradicting answer #2 do not actually hold.

When having such doubts, it is useful to imagine that you simply do not have any access to a test set during the model fitting process (which includes feature importance); you should treat the test set as literally unseen data (and, being unseen, it could not have been used for feature importance scores).

Hastie & Tibshirani argued clearly long ago about the correct & wrong way to perform such processes; I have summarized the issue in a blog post, How NOT to perform feature selection! - and, although the discussion there is about cross-validation, it is easy to see that the arguments hold for the case of a train/test split, too.
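To make the correct procedure concrete, here is a minimal sketch, not from the original post and assuming scikit-learn with SelectKBest and LogisticRegression merely as stand-ins: the split happens first, the selector is fitted on the training data only, and the test data is only transformed with the already-fitted selector.

    # Minimal sketch (assumption: scikit-learn; SelectKBest is just an example selector)
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=30, n_informative=5, random_state=0)

    # Split first; from this point on, the test set is treated as unseen data
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Fit the selector on the training data only (no test information is used)
    selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)
    X_train_sel = selector.transform(X_train)
    X_test_sel = selector.transform(X_test)   # the test set is only transformed, never fitted on

    model = LogisticRegression(max_iter=1000).fit(X_train_sel, y_train)
    print("Test accuracy:", model.score(X_test_sel, y_test))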

The only argument that actually holds in your contradicting answer #2 is that

the overall historical data is not analyzed

Nevertheless, this is the necessary price to pay in order to have an independent test set for performance assessment; otherwise, by the same logic, we should use the test set for training, too, shouldn't we?


Wrap up: the test set is there solely for performance assessment of your model, and it should not be used in any stage of model building, including feature selection.

UPDATE (after comments):

the trends in the Test Set may be different

A standard (but often implicit) assumption here is that the training & test sets are qualitatively similar; it is exactly due to this assumption that we feel OK to just use simple random splits to get them. If we have reasons to believe that our data change in significant ways (not only between train & test, but during model deployment, too), the whole rationale breaks down, and completely different approaches are required.

Also, on doing so, there can be a high probability of Over-fitting

The only certain way of overfitting is to use the test set in any way during the pipeline (including for feature selection, as you suggest). Arguably, the linked blog post has enough arguments (including quotes & links) to be convincing. A classic example is the testimony in The Dangers of Overfitting or How to Drop 50 spots in 1 minute:

as the competition went on, I began to use much more feature selection and preprocessing. However, I made the classic mistake in my cross-validation method by not including this in the cross-validation folds (for more on this mistake, see this short description or section 7.10.2 in The Elements of Statistical Learning). This lead to increasingly optimistic cross-validation estimates.

As I have already said, although the discussion here is about cross-validation, it should not be difficult to convince yourself that it perfectly applies to the train/test case, too.
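For the cross-validation case, a minimal sketch (again an illustration assuming scikit-learn; the selector and classifier are placeholders) of the leak-free way is to wrap the selection step in a Pipeline so that it is re-fitted inside every fold, in contrast to the mistaken way of selecting on the full data before cross-validating:

    # Minimal sketch (assumption: scikit-learn; SelectKBest/LogisticRegression are placeholders)
    from sklearn.datasets import make_classification
    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=30, n_informative=5, random_state=0)

    # Correct: selection happens inside each CV fold, using only that fold's training part
    pipe = Pipeline([
        ("select", SelectKBest(f_classif, k=10)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    scores = cross_val_score(pipe, X, y, cv=5)
    print("Leak-free CV accuracy:", scores.mean())

    # Mistaken: selecting on the full data first leaks information into every fold
    X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)
    leaky_scores = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)
    print("Optimistic (leaky) CV accuracy:", leaky_scores.mean())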

feature selection should be done in such a way that Model Performance is enhanced

Well, nobody can argue with this, of course! The catch is - which exact performance are we talking about? Because the Kaggler quoted above was indeed getting better "performance" as he was going along (applying a mistaken procedure), until his model was faced with real unseen data (the moment of truth!), and it unsurprisingly flopped.

Admittedly, this is not trivial stuff, and it may take some time to internalize (it's no coincidence that, as Hastie & Tibshirani demonstrate, there are even research papers where the procedure is performed wrongly). Until then, my advice to keep you safe is: during all stages of model building (including feature selection), pretend that you don't have access to the test set at all, and that it only becomes available when you need to assess the performance of your final model.

Answered Sep 21 '22 by desertnaut