After building a model with GridSearchCV, I get the cross-validation results with model.cv_results_. But one of the reported values is confusing to me. What does rank_test_score stand for here?
mean_fit_time 0.00265972
std_fit_time 0.000466648
mean_score_time 0.00133236
std_score_time 0.000470977
param_n_neighbors 1
param_weights distance
params {'n_neighbors': 1, 'weights': 'distance'}
split0_test_score 0.70405
split1_test_score 0.73125
split2_test_score 0.69906
mean_test_score 0.711458
std_test_score 0.0141423
rank_test_score 1
split0_train_score 1
split1_train_score 1
split2_train_score 1
mean_train_score 1
std_train_score 0
Name: 1, dtype: object
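For context, the output above looks like a single row of cv_results_ viewed as a pandas Series. A minimal sketch of how such a result could be produced (the dataset and the exact parameter grid are my own assumptions, chosen to match the n_neighbors/weights parameters shown):

    # Sketch only: the dataset and parameter grid are assumptions.
    import pandas as pd
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    model = GridSearchCV(
        KNeighborsClassifier(),
        {"n_neighbors": [1, 3, 5], "weights": ["uniform", "distance"]},
        cv=3,                     # matches the split0..split2 columns above
        return_train_score=True,  # needed for the *_train_score columns
    )
    model.fit(X, y)

    results = pd.DataFrame(model.cv_results_)
    print(results.iloc[1])        # one row per parameter combination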
rank_test_score indicates the rank of a grid search parameter combination based on its mean_test_score. If you try N parameter combinations in your grid search, rank_test_score ranges from 1 to N.
cv: the number of cross-validation folds to run for each candidate set of hyperparameters. verbose: set it to 1 (or higher) to get a detailed printout while GridSearchCV fits the data.
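As a rough illustration of both points (the classifier, dataset, and grid below are assumptions, not taken from the question), you can set verbose and then sort cv_results_ by rank_test_score to see the candidates ordered from best to worst mean_test_score:

    # Sketch only: the estimator, data, and grid are assumptions.
    import pandas as pd
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    model = GridSearchCV(
        KNeighborsClassifier(),
        {"n_neighbors": [1, 3, 5], "weights": ["uniform", "distance"]},
        cv=3,        # 3-fold cross-validation per candidate
        verbose=1,   # print progress while fitting
    ).fit(X, y)

    results = pd.DataFrame(model.cv_results_)
    # Rank 1 = highest mean_test_score; ties share the same rank.
    print(
        results[["params", "mean_test_score", "std_test_score", "rank_test_score"]]
        .sort_values("rank_test_score")
    )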
Grid search is used to find the hyperparameters of a model that result in the most 'accurate' predictions. Grid-search cross-validation is a technique for selecting the best variant of a machine learning model from a grid of candidate hyperparameter values. The scikit-learn library ships with a grid-search cross-validation implementation, GridSearchCV.
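As a quick sketch (the estimator and data are again assumed), the combination ranked 1 in cv_results_ is what GridSearchCV exposes as the best model after refitting:

    # Sketch only: GridSearchCV refits the rank-1 candidate on the full data.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    search = GridSearchCV(
        KNeighborsClassifier(),
        {"n_neighbors": [1, 3, 5], "weights": ["uniform", "distance"]},
        cv=3,
    ).fit(X, y)

    print(search.best_params_)  # parameters of the rank-1 combination
    print(search.best_score_)   # its mean_test_score
    print(search.best_index_)   # its row index in cv_results_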
rank_test_score indicates the rank of a grid search parameter combination based on its mean_test_score. If you try N parameter combinations in your grid search, rank_test_score ranges from 1 to N: the combination with the highest mean_test_score gets a rank_test_score of 1, and the combination with the lowest mean_test_score gets a rank_test_score of N.

If you use multiple metrics for evaluation (say, 'neg_mean_squared_error' and 'neg_mean_absolute_error'), you get one rank column per metric (here rank_test_neg_mean_squared_error and rank_test_neg_mean_absolute_error), each indicating the rank of the estimator according to the respective metric.
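A hedged sketch of the multi-metric case (the regressor, dataset, and grid are assumptions; since both scorers are regression metrics, a regression setup is used):

    # Sketch only: multi-metric scoring produces one rank column per metric.
    import pandas as pd
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsRegressor

    X, y = load_diabetes(return_X_y=True)

    search = GridSearchCV(
        KNeighborsRegressor(),
        {"n_neighbors": [1, 3, 5], "weights": ["uniform", "distance"]},
        scoring=["neg_mean_squared_error", "neg_mean_absolute_error"],
        refit="neg_mean_squared_error",  # with multiple metrics, refit must name one of them
        cv=3,
    ).fit(X, y)

    results = pd.DataFrame(search.cv_results_)
    print(results[["params",
                   "rank_test_neg_mean_squared_error",
                   "rank_test_neg_mean_absolute_error"]])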