Does GridSearchCV perform cross-validation?

I'm currently working on a problem that compares the performance of three different machine learning algorithms on the same dataset. I split the dataset 70/30 into training/testing sets and then performed a grid search for each algorithm's best parameters using GridSearchCV with X_train and y_train.

First question: am I supposed to perform the grid search on the training set, or on the whole dataset?

Second question: I know that GridSearchCV uses K-fold in its implementation. Does that mean I performed cross-validation if I used the same X_train and y_train for all three algorithms I compare in GridSearchCV?

Any answer would be appreciated, thank you.

kevinH asked Mar 07 '18

People also ask

What is the use of GridSearchCV?

GridSearchCV is a technique for searching out the best parameter values from a given grid of parameters. It is essentially a cross-validation method: the model and the candidate parameters are fed in, the best parameter values are extracted, and predictions are then made with the refitted model.
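As a minimal sketch of that usage (the SVM model and the parameter grid here are illustrative choices, not from the question):

```python
# Hypothetical sketch: tuning an SVM's C parameter with GridSearchCV.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Each candidate in param_grid is evaluated with 5-fold cross-validation.
grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)
grid.fit(X, y)

print(grid.best_params_)  # best parameter combination found
print(grid.best_score_)   # mean cross-validated score of that combination
```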

What is the difference between K fold and cross-validation?

cross_val_score is a function that evaluates a model on data and returns the scores. KFold, on the other hand, is a class that lets you split your data into K folds.
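The distinction can be sketched like this (the logistic-regression model is an arbitrary example):

```python
# KFold vs cross_val_score: splitter class vs scoring function.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)

# KFold is just a splitter: it yields train/test index pairs, it scores nothing.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    pass  # train_idx and test_idx index into X and y

# cross_val_score runs the full loop: fit on each training fold,
# score on the held-out fold, and return one score per fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kf)
print(scores)
```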

What is the difference between RandomizedSearchCV and GridSearchCV?

RandomizedSearchCV and GridSearchCV differ widely in runtime. Depending on the n_iter chosen, RandomizedSearchCV can be two, three, or four times faster than GridSearchCV. However, the higher the chosen n_iter, the slower RandomizedSearchCV becomes and the closer it gets to GridSearchCV.
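The speed difference comes from how many candidates each one evaluates, which the following sketch illustrates (the random-forest model and grid are illustrative assumptions):

```python
# GridSearchCV tries every combination; RandomizedSearchCV samples n_iter of them.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = load_iris(return_X_y=True)
params = {"n_estimators": [10, 50, 100], "max_depth": [2, 4, None]}

# Exhaustive search: all 3 x 3 = 9 combinations are cross-validated.
grid = GridSearchCV(RandomForestClassifier(random_state=0), params, cv=3)
grid.fit(X, y)
print(len(grid.cv_results_["params"]))  # 9

# Randomized search: only n_iter=4 sampled combinations are cross-validated.
rand = RandomizedSearchCV(RandomForestClassifier(random_state=0), params,
                          n_iter=4, cv=3, random_state=0)
rand.fit(X, y)
print(len(rand.cv_results_["params"]))  # 4
```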

2 Answers

All estimators in scikit-learn whose names end with CV perform cross-validation. But you still need to keep a separate test set for measuring performance.

So you need to split your whole data into train and test sets. Set this test data aside for a while.

Then pass only this train data to the grid search. GridSearchCV will split the train data further into internal train and validation folds to tune the hyper-parameters passed to it, and will finally fit the model on the whole train data with the best parameters found.

Now test this model on the test data you kept aside at the beginning. This gives you a near-real-world estimate of the model's performance.

If you feed the whole data into GridSearchCV, test data leaks into the parameter tuning, and the final model may not perform as well on newer, unseen data.
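The workflow above can be sketched as follows (the SVM model, grid, and 70/30 split are illustrative assumptions consistent with the question):

```python
# Split once, tune on the training set only, evaluate once on the test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 1. Split; the test set stays untouched until the very end.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# 2. Grid search on the training data only; cross-validation happens inside.
grid = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)  # refit=True (the default) retrains on all of X_train

# 3. Final, unbiased performance estimate on the held-out test set.
print(grid.score(X_test, y_test))
```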

You can look at my other answers which describe the GridSearch in more detail:

  • Model help using Scikit-learn when using GridSearch
  • scikit-learn GridSearchCV with multiple repetitions
Vivek Kumar answered Sep 27 '22


Yes, GridSearchCV performs cross-validation. If I understand the concept correctly, you want to keep part of your dataset unseen by the model in order to test it.

So you train your models on the training dataset and test them on the testing dataset.

Here I was doing almost the same - you might want to check it...

MaxU - stop WAR against UA answered Sep 27 '22