How to get both MSE and R2 from a sklearn GridSearchCV?

I can use GridSearchCV on a pipeline and set scoring to either 'MSE' or 'R2'. I can then access gridsearchcv.best_score_ to recover the one I specified. How do I also get the other score for the solution found by GridSearchCV?

If I run GridSearchCV again with the other scoring parameter, it might not find the same solution, and so the score it reports might not correspond to the same model as the one for which we have the first value.

Maybe I can extract the parameters and supply them to a new pipeline, and then run cross_val_score with the new pipeline? Is there a better way? Thanks.
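
Something along these lines is what I have in mind, as a minimal sketch only, assuming a fitted GridSearchCV called grid, training data X and y, and the scoring-string names from newer sklearn releases:

from sklearn.base import clone
from sklearn.model_selection import cross_val_score

# `grid` is assumed to be an already-fitted GridSearchCV; X, y are the training data.
best_pipeline = clone(grid.best_estimator_)  # unfitted copy with the winning parameters

# Score the same configuration once per metric.
r2_scores = cross_val_score(best_pipeline, X, y, scoring='r2', cv=5)
mse_scores = -cross_val_score(best_pipeline, X, y, scoring='neg_mean_squared_error', cv=5)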

Asked Aug 04 '14 by rhombidodecahedron


1 Answer

This is unfortunately not straightforward right now with GridSearchCV, or with any built-in sklearn method/object.

Although there is talk of having multiple scorer outputs, this feature will probably not come soon.

So you will have to do it yourself; there are several ways:

1) You can take a look at the code of cross_val_score and perform the cross-validation loop yourself, calling the scorers of interest once each fold is done (see the sketch after this list).

2) [not recommended] You can also build your own scorer out of the scorers you are interested in and have it output the scores as an array. You will then run into the problem explained here: sklearn - Cross validation with multiple scores

3) Since you can code your own scorers, you can write a scorer that returns one of your scores (the one you want GridSearchCV to use for its decisions) and stores all the other scores you are interested in somewhere else, such as a module-level/global variable or even a file.
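
For completeness, a minimal sketch of option 1, hand-rolling the fold loop with KFold; the Ridge model and random data are placeholders:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import KFold

X = np.random.randn(20, 10)
y = np.random.randn(20)

r2s, mses = [], []
for train_idx, test_idx in KFold(n_splits=5).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    r2s.append(r2_score(y[test_idx], preds))             # metric used for selection
    mses.append(mean_squared_error(y[test_idx], preds))  # the extra metric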

Number 3 seems the least tedious and most promising:

import numpy as np

from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older releases

# Side channel for the metric the scoring interface does not report.
secret_mses = []

def r2_secret_mse(estimator, X_test, y_test):
    # Return R2 (used for model selection) and stash the MSE on the side.
    predictions = estimator.predict(X_test)
    secret_mses.append(mean_squared_error(y_test, predictions))
    return r2_score(y_test, predictions)

X = np.random.randn(20, 10)
y = np.random.randn(20)

r2_scores = cross_val_score(Ridge(), X, y, scoring=r2_secret_mse, cv=5)

You will find the R2 scores in r2_scores and the corresponding MSEs in secret_mses.

Note that this can become messy if you go parallel (n_jobs > 1), since the workers will not share the global list. In that case you would need to write the scores to a specific shared place, a memmap for example.
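
To connect this back to the original question, a minimal sketch of plugging the same scorer into GridSearchCV, kept single-process so the global list keeps working; the alpha grid is a placeholder, and the exact number and order of entries in secret_mses may vary across sklearn versions:

from sklearn.model_selection import GridSearchCV

del secret_mses[:]  # reset the side channel before the search

grid = GridSearchCV(Ridge(),
                    param_grid={'alpha': [0.1, 1.0, 10.0]},
                    scoring=r2_secret_mse,
                    cv=5,
                    n_jobs=1)  # parallel workers would not share secret_mses
grid.fit(X, y)

print(grid.best_score_)  # best mean R2 across folds
# secret_mses now holds the MSE of every (parameter setting, fold) evaluation.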

Answered by eickenberg