Before scikit-learn 0.20 we could use result.grid_scores_[result.best_index_] to get the standard deviation. (It returned, for example: mean: 0.76172, std: 0.05225, params: {'n_neighbors': 21})
What's the best way in scikit-learn 0.20 to get the standard deviation of the best score?
In newer versions, grid_scores_ has been renamed to cv_results_. Following the documentation, you need this:
> best_index_ : int
> The index (of the cv_results_ arrays) which corresponds to the best candidate parameter setting. The dict at search.cv_results_['params'][search.best_index_] gives the parameter setting for the best model, that gives the highest mean score (search.best_score_).
So in your case, you need:
Best parameters: result.cv_results_['params'][result.best_index_] or result.best_params_
Best mean score: result.cv_results_['mean_test_score'][result.best_index_] or result.best_score_
Best std: result.cv_results_['std_test_score'][result.best_index_]
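Putting it together, here is a minimal runnable sketch; the iris dataset and the n_neighbors grid are illustrative assumptions, not from the original question:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={'n_neighbors': [5, 11, 21]},  # hypothetical grid for illustration
    cv=5,
)
result = search.fit(X, y)

# Equivalent of the old grid_scores_[best_index_] output:
print('mean:', result.best_score_)                                       # best mean test score
print('std:', result.cv_results_['std_test_score'][result.best_index_])  # std of the best score
print('params:', result.best_params_)                                    # best parameter setting
```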