I can use GridSearchCV on a pipeline and set scoring to either 'neg_mean_squared_error' or 'r2'. I can then read the search object's best_score_ attribute to recover the one I specified. How do I also get the other score for the solution found by GridSearchCV?
If I run GridSearchCV again with the other scoring parameter, it might not find the same solution, and so the score it reports might not correspond to the same model as the one for which we have the first value.
Maybe I can extract the best parameters, supply them to a new pipeline, and then run cross_val_score with that pipeline? Is there a better way? Thanks.
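The idea at the end of the question can be sketched directly. This is a minimal illustration, assuming a plain Ridge model rather than a full pipeline; it relies on the fact that an unshuffled integer cv produces the same deterministic folds in both calls, so the second set of scores really does correspond to the winning parameters:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, cross_val_score

X = np.random.randn(50, 10)
y = np.random.randn(50)

# Search with one metric...
grid = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                    scoring="r2", cv=5)
grid.fit(X, y)

# ...then re-score the winning parameters with the other metric
# on the same deterministic, unshuffled 5-fold splits.
best = Ridge(**grid.best_params_)
mse_scores = cross_val_score(best, X, y,
                             scoring="neg_mean_squared_error", cv=5)
```

Note that this refits the model once per fold a second time, which can be wasteful for expensive estimators.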
This is unfortunately not straightforward with GridSearchCV, or any other built-in sklearn method/object. Although there has been talk of supporting multiple scorer outputs, that feature will probably not arrive soon. (Newer scikit-learn releases do let you pass a list or dict of metrics to scoring together with refit=<metric name>, so check whether your version already supports this.)
So you will have to do it yourself, there are several ways:
1) You can take a look at the code of cross_val_score and run the cross-validation loop yourself, calling every scorer of interest once each fold is done.
2) [not recommended] You can also build a single scorer out of the scorers you are interested in and have it return the scores as an array. You will then run into the problem explained here: sklearn - Cross validation with multiple scores
3) Since you can code your own scorers, you could make a scorer that returns the one score GridSearchCV should use for its decisions, and stores every other score you are interested in somewhere else: a module-level variable, or even a file.
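For reference, option 1 (the manual loop) is also only a few lines. This is a sketch, again assuming Ridge as the model; it computes both metrics on each fold so they are guaranteed to come from the same fitted model:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import KFold

X = np.random.randn(50, 10)
y = np.random.randn(50)

r2s, mses = [], []
for train_idx, test_idx in KFold(n_splits=5).split(X):
    # Fit on the training fold, score the held-out fold with both metrics.
    model = Ridge().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    r2s.append(r2_score(y[test_idx], pred))
    mses.append(mean_squared_error(y[test_idx], pred))
```

To reproduce the grid search part you would wrap this loop in another loop over the parameter grid.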
Number 3 seems the least tedious and most promising:
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import cross_val_score

# Side channel for the extra metric: the scorer appends to it on every fold.
secret_mses = []

def r2_secret_mse(estimator, X_test, y_test):
    predictions = estimator.predict(X_test)
    secret_mses.append(mean_squared_error(y_test, predictions))
    return r2_score(y_test, predictions)

X = np.random.randn(20, 10)
y = np.random.randn(20)

r2_scores = cross_val_score(Ridge(), X, y, scoring=r2_secret_mse, cv=5)
You will find the R2 scores in r2_scores and the corresponding MSEs in secret_mses.
Note that this can become messy if you go parallel (n_jobs > 1): each worker process gets its own copy of the list, so the appends never reach the parent. In that case you would need to write the scores to a specific place, in a memmap for example.
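One way to do that can be sketched as follows. The names (write_score, fold_scores.mmap) and the placeholder values are illustrative, not part of any real scorer; each worker reopens an on-disk memmap and writes into its own slot, so the results survive the process boundary:

```python
import os
import tempfile
import numpy as np
from joblib import Parallel, delayed

n_folds = 5
path = os.path.join(tempfile.mkdtemp(), "fold_scores.mmap")

# Pre-allocate one float64 slot per fold on disk.
np.memmap(path, dtype="float64", mode="w+", shape=(n_folds,)).flush()

def write_score(fold_id):
    # In a real scorer this would be the fold's MSE; a placeholder here.
    out = np.memmap(path, dtype="float64", mode="r+", shape=(n_folds,))
    out[fold_id] = float(fold_id) ** 2
    out.flush()

# Each worker process writes to a distinct slot, so no locking is needed.
Parallel(n_jobs=2)(delayed(write_score)(i) for i in range(n_folds))
scores = np.array(np.memmap(path, dtype="float64", mode="r", shape=(n_folds,)))
```

Because every fold owns exactly one slot, concurrent writes never overlap; if you needed to append instead, you would have to add locking or one file per worker.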