How can I extract the best pipeline from a fitted GridSearchCV, so that I can pass it on to cross_val_predict?
Directly passing the fitted GridSearchCV object causes cross_val_predict to run the whole grid search again; I just want the best pipeline to be subject to the cross_val_predict evaluation.
My self-contained code is below:
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import StratifiedKFold
from sklearn import metrics
# fetch data
newsgroups = fetch_20newsgroups(remove=('headers', 'footers', 'quotes'), categories=['comp.graphics', 'rec.sport.baseball', 'sci.med'])
X = newsgroups.data
y = newsgroups.target
# setup and run GridSearchCV
wordvect = TfidfVectorizer(analyzer='word', lowercase=True)
classifier = OneVsRestClassifier(SVC(kernel='linear', class_weight='balanced'))
pipeline = Pipeline([('vect', wordvect), ('classifier', classifier)])
scoring = 'f1_weighted'
parameters = {
    'vect__min_df': [1, 2],
    'vect__max_df': [0.8, 0.9],
    'classifier__estimator__C': [0.1, 1, 10],
}
gs_clf = GridSearchCV(pipeline, parameters, n_jobs=8, scoring=scoring, verbose=1)
gs_clf = gs_clf.fit(X, y)
### outputs: Fitting 3 folds for each of 12 candidates, totalling 36 fits
# manually extract the best models from the grid search to re-build the pipeline
best_clf = gs_clf.best_estimator_.named_steps['classifier']
best_vectorizer = gs_clf.best_estimator_.named_steps['vect']
best_pipeline = Pipeline([('best_vectorizer', best_vectorizer), ('classifier', best_clf)])
# passing gs_clf here would run the grid search again inside cross_val_predict
y_predicted = cross_val_predict(best_pipeline, X, y)
print(metrics.classification_report(y, y_predicted, digits=3))
What I am currently doing is manually re-building the pipeline from the best_estimator_. But my pipelines usually have more steps, such as SVD or PCA, and I often add or remove steps and re-run the grid search while exploring the data. Every such change then has to be repeated when manually re-building the pipeline, which is error-prone.
Is there a way to directly extract the best pipeline from a fitted GridSearchCV, so that I can pass it to cross_val_predict?
y_predicted = cross_val_predict(gs_clf.best_estimator_, X, y)
works and returns:
Fitting 3 folds for each of 12 candidates, totalling 36 fits
[Parallel(n_jobs=4)]: Done 36 out of 36 | elapsed: 43.6s finished
             precision    recall  f1-score   support

          0      0.920     0.911     0.916       584
          1      0.894     0.943     0.918       597
          2      0.929     0.887     0.908       594

avg / total      0.914     0.914     0.914      1775
[Edit] When I tried the code again passing simply pipeline (the original, unfitted pipeline), it returned the same output (as did passing best_pipeline). So it may be possible to just work with the pipeline itself, but I'm not 100% sure about that.
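A minimal, self-contained sketch of the pattern, swapping in the iris dataset and a StandardScaler/SVC pipeline (my assumptions, chosen only so it runs quickly) in place of the 20newsgroups setup above. The point it illustrates: best_estimator_ is itself a Pipeline carrying the winning parameters for every step, so it can be handed straight to cross_val_predict, which clones and refits it per fold, without rebuilding anything by hand:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_predict
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn import metrics

X, y = load_iris(return_X_y=True)

pipeline = Pipeline([('scale', StandardScaler()),
                     ('svc', SVC(kernel='linear'))])
parameters = {'svc__C': [0.1, 1, 10]}

gs_clf = GridSearchCV(pipeline, parameters, scoring='f1_weighted', cv=3)
gs_clf.fit(X, y)

# best_estimator_ is a complete Pipeline with the best params for every step
print(type(gs_clf.best_estimator_))
print(list(gs_clf.best_estimator_.named_steps))

# only the single best pipeline is evaluated here; no grid search is re-run
y_predicted = cross_val_predict(gs_clf.best_estimator_, X, y, cv=3)
print(metrics.classification_report(y, y_predicted, digits=3))
```

Because cross_val_predict clones the estimator before fitting each fold, the fact that best_estimator_ was already refit on the full data does not leak into the cross-validated predictions.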