GridSearchCV uses 'scoring' to select the best estimator. After fitting a GridSearchCV, I would like to see the score for each parameter combination. Does GridSearchCV store the scores for every parameter combination? If it does, how do I get them? Thanks.
Here is example code that I used in another post.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
X_train = ['qwe rtyuiop', 'asd fghj kl', 'zx cv bnm', 'qw erty ui op', 'as df ghj kl', 'zxc vb nm', 'qwe rt yu iop', 'asdfg hj kl', 'zx cvb nm',
'qwe rt yui op', 'asd fghj kl', 'zx cvb nm', 'qwer tyui op', 'asd fg hjk l', 'zx cv b nm', 'qw ert yu iop', 'as df gh jkl', 'zx cvb nm',
'qwe rty uiop', 'asd fghj kl', 'zx cvbnm', 'qw erty ui op', 'as df ghj kl', 'zxc vb nm', 'qwe rtyu iop', 'as dfg hj kl', 'zx cvb nm',
'qwe rt yui op', 'asd fg hj kl', 'zx cvb nm', 'qwer tyuiop', 'asd fghjk l', 'zx cv b nm', 'qw ert yu iop', 'as df gh jkl', 'zx cvb nm']
y_train = ['1', '2', '3', '1', '1', '3', '1', '2', '3',
'1', '2', '3', '1', '4', '1', '2', '2', '4',
'1', '2', '3', '1', '1', '3', '1', '2', '3',
'1', '2', '3', '1', '4', '1', '2', '2', '4']
parameters = {
    'clf__alpha': (1e-1, 1e-2),
    'vect__ngram_range': [(1, 2), (1, 3)],
    'vect__max_df': (0.9, 0.98),
}
text_clf_Pipline_MultinomialNB = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])
gs_clf = GridSearchCV(text_clf_Pipline_MultinomialNB, parameters, n_jobs=-1)
gs_classifier = gs_clf.fit(X_train, y_train)
GridSearchCV searches for the best parameter values over a given grid of parameters; it is essentially a cross-validated search. The model and the parameter grid are fed in, the best parameter values are extracted, and predictions are then made with the best estimator.
GridSearchCV tries every combination of the values passed in the dictionary and evaluates the model for each combination using cross-validation. After fitting we therefore have a score for every combination of hyperparameters and can pick the one with the best performance.
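For instance, the grid above expands to 2 x 2 x 2 = 8 combinations, which you can enumerate yourself with ParameterGrid. A quick sketch (the class lives in sklearn.grid_search in older scikit-learn and in sklearn.model_selection from 0.18 on):

from sklearn.grid_search import ParameterGrid  # sklearn.model_selection in scikit-learn >= 0.18

# The combinations GridSearchCV will evaluate with cross-validation
combos = list(ParameterGrid(parameters))
print(len(combos))  # 8: 2 alphas x 2 ngram ranges x 2 max_df values
for combo in combos:
    print(combo)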
The only difference between the two approaches is that in grid search we define the combinations and train the model on every one of them, whereas RandomizedSearchCV samples the combinations randomly. Both are very effective ways of tuning parameters that improve the model's generalizability.
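A minimal sketch of the randomized variant, reusing the pipeline and parameter grid from the question (RandomizedSearchCV is imported from the same module as GridSearchCV):

from sklearn.grid_search import RandomizedSearchCV  # sklearn.model_selection in scikit-learn >= 0.18

# Sample 4 of the 8 combinations at random instead of trying all of them
rs_clf = RandomizedSearchCV(text_clf_Pipline_MultinomialNB, parameters,
                            n_iter=4, n_jobs=-1)
rs_classifier = rs_clf.fit(X_train, y_train)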
Yes, it does, exactly as stated in the docs:
grid_scores_ : list of named tuples
Contains scores for all parameter combinations in param_grid. Each entry corresponds to one parameter setting. Each named tuple has the attributes:
- parameters, a dict of parameter settings
- mean_validation_score, the mean score over the cross-validation folds
- cv_validation_scores, the list of scores for each fold
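So after fitting you can read the stored scores directly from the object in the question. A minimal sketch (grid_scores_ belongs to the old sklearn.grid_search API; in scikit-learn 0.18+ it was replaced by the cv_results_ dict):

# Old API (sklearn.grid_search, scikit-learn < 0.18)
for entry in gs_classifier.grid_scores_:
    print(entry.parameters, entry.mean_validation_score, entry.cv_validation_scores)

# Newer API (sklearn.model_selection, scikit-learn >= 0.18): the same
# information lives in the cv_results_ dict instead, e.g.
# print(gs_classifier.cv_results_['params'])
# print(gs_classifier.cv_results_['mean_test_score'])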