I'm trying to get the best set of parameters for an SVR model. I'd like to use GridSearchCV over different values of C. However, from previous tests I noticed that the split into training/test set highly influences the overall performance (r2 in this instance). To address this problem, I'd like to implement a repeated 5-fold cross-validation (10 x 5CV). Is there a built-in way of performing it using GridSearchCV?
QUICK SOLUTION:
Following the idea presented in the official scikit-learn documentation, a quick solution is:
import numpy
from sklearn.model_selection import GridSearchCV, KFold

# svr, p_grid and your data X, y are assumed to be defined already
NUM_TRIALS = 10
scores = []
for i in range(NUM_TRIALS):
    cv = KFold(n_splits=5, shuffle=True, random_state=i)
    clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=cv)
    clf.fit(X, y)  # fit before reading best_score_
    scores.append(clf.best_score_)
print("Average Score: {0} STD: {1}".format(numpy.mean(scores), numpy.std(scores)))
The only difference between the two approaches is that in grid search we define the parameter combinations explicitly and train the model on each of them, whereas RandomizedSearchCV samples the combinations randomly. Both are very effective ways of tuning parameters that increase model generalizability.
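As a minimal sketch of the randomized variant (the distribution and n_iter below are illustrative, with svr, X, y assumed defined as before), you can sample C from a log-uniform range instead of enumerating a grid:

from scipy.stats import loguniform
from sklearn.model_selection import RandomizedSearchCV

# Illustrative search space; adjust the bounds for your problem
param_dist = {"C": loguniform(1e-1, 1e3)}
search = RandomizedSearchCV(estimator=svr, param_distributions=param_dist,
                            n_iter=20, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)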
best_score_ is the mean score over all CV folds for the single best combination of the parameters you specify in the parameter grid. To access other relevant details about the grid-searching process, you can look at the fitted grid object.
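For example, assuming clf has been fitted as above, the cv_results_ attribute exposes the per-combination scores:

print(clf.best_params_)                    # best parameter combination
print(clf.best_score_)                     # its mean CV score
print(clf.cv_results_["mean_test_score"])  # mean score for every combination
print(clf.cv_results_["std_test_score"])   # fold-to-fold spread per combination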
By default no shuffling occurs, including for the (stratified) K-fold cross-validation performed by specifying cv=some_integer to cross_val_score, grid search, etc. Keep in mind that train_test_split still returns a random split.
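A short sketch of the difference (svr, X, y assumed defined):

from sklearn.model_selection import KFold, cross_val_score

scores_unshuffled = cross_val_score(svr, X, y, cv=5)  # KFold without shuffling
scores_shuffled = cross_val_score(
    svr, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))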
param_grid – A dictionary with parameter names as keys and lists of parameter values.
scoring – The performance measure. For example, 'r2' for regression models, 'precision' for classification models.
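Put together for the regression case in the question (the gamma values are illustrative):

p_grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]}
clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=5, scoring="r2")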
This is called nested cross-validation. You can look at the official documentation example to point you in the right direction, and also have a look at my other answer here for a similar approach. You can adapt the steps to suit your needs:
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# The docs example uses SVC on the iris data; for your regression problem,
# swap in SVR and your own X, y
X_iris, y_iris = load_iris(return_X_y=True)
svr = SVC(kernel="rbf")
c_grid = {"C": [1, 10, 100]}  # extend with more values as needed

# CV technique: "KFold", "GroupKFold", "LeaveOneOut", etc.
# To be used within GridSearchCV (5 in your case)
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0)
# To be used in the outer CV (you asked for 10)
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0)

# Non-nested parameter search and scoring
clf = GridSearchCV(estimator=svr, param_grid=c_grid, cv=inner_cv)
clf.fit(X_iris, y_iris)
non_nested_score = clf.best_score_

# Pass the (unfitted) GridSearchCV estimator to cross_val_score.
# This will be your required 10 x 5 CV:
# 10 for the outer CV and 5 for GridSearchCV's internal CV
clf = GridSearchCV(estimator=svr, param_grid=c_grid, cv=inner_cv)
nested_score = cross_val_score(clf, X=X_iris, y=y_iris, cv=outer_cv).mean()
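Note that non_nested_score is typically optimistically biased, since the same folds are used both to choose the hyper-parameters and to report the score; nested_score is the more honest estimate of generalization.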
Edit - Description of nested cross-validation with cross_val_score() and GridSearchCV():

1. Pass clf, X, y, outer_cv to cross_val_score.
2. Inside cross_val_score, X will be divided into X_outer_train and X_outer_test using outer_cv. Same for y.
3. X_outer_test will be held back, and X_outer_train will be passed on to clf for fit() (GridSearchCV in our case). Assume X_outer_train is called X_inner from here on, since it is passed to the inner estimator, and assume y_outer_train is y_inner.
4. X_inner will now be split into X_inner_train and X_inner_test using inner_cv in the GridSearchCV. Same for y.
5. The grid search estimator is trained using X_inner_train and y_inner_train and scored using X_inner_test and y_inner_test, repeating this for each of the 5 inner splits.
6. The hyper-parameters for which the average score over all (X_inner_train, X_inner_test) splits is best are passed on to clf.best_estimator_, which is then fitted on all of the outer training data, i.e. X_outer_train.
7. This clf (gridsearch.best_estimator_) will then be scored using X_outer_test and y_outer_test.
8. Steps 2 to 7 are repeated for each of the 10 outer splits, and the resulting array of scores is returned from cross_val_score; taking its mean() gives nested_score.
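A minimal sketch of the same flow with explicit loops may make this easier to follow (simplified, and assuming the objects defined in the code above):

import numpy as np
from sklearn.base import clone

outer_scores = []
for train_idx, test_idx in outer_cv.split(X_iris):
    X_outer_train, X_outer_test = X_iris[train_idx], X_iris[test_idx]
    y_outer_train, y_outer_test = y_iris[train_idx], y_iris[test_idx]
    gs = clone(clf)                       # fresh, unfitted GridSearchCV per outer fold
    gs.fit(X_outer_train, y_outer_train)  # inner 5-fold search, then refit of best_estimator_
    outer_scores.append(gs.score(X_outer_test, y_outer_test))
nested_score = np.mean(outer_scores)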
You can supply different cross-validation generators to GridSearchCV. The default for binary or multiclass classification problems is StratifiedKFold; otherwise, it uses KFold. But you can supply your own. In your case, it looks like you want RepeatedKFold or RepeatedStratifiedKFold.
from sklearn.model_selection import GridSearchCV, RepeatedKFold

# Define svr here
...
# Specify the cross-validation generator, in this case (10 x 5CV)
cv = RepeatedKFold(n_splits=5, n_repeats=10)
clf = GridSearchCV(estimator=svr, param_grid=p_grid, cv=cv)
# Continue as usual
clf.fit(...)
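With this setup GridSearchCV evaluates every parameter combination on 50 splits (10 repeats x 5 folds), and best_score_ is the mean over all of them, which directly addresses the split sensitivity you observed.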