
Difference between cross_val_score and cross_val_predict

I want to evaluate a regression model built with scikit-learn using cross-validation, and I am confused about which of the two functions, cross_val_score and cross_val_predict, I should use. One option would be:

from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

cvs = DecisionTreeRegressor(max_depth=depth)
scores = cross_val_score(cvs, predictors, target, cv=cvfolds, scoring='r2')
print("R2-Score: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))

Another would be to use the CV predictions with the standard r2_score:

from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

cvp = DecisionTreeRegressor(max_depth=depth)
predictions = cross_val_predict(cvp, predictors, target, cv=cvfolds)
print("CV R^2-Score: {}".format(r2_score(target, predictions)))

I would assume that both methods are valid and give similar results. But that is only the case for small numbers of folds. While the R^2 is roughly the same for 10-fold CV, it gets increasingly lower for higher k values in the first version, using cross_val_score. The second version is mostly unaffected by changing the number of folds.

Is this behavior to be expected, or do I lack some understanding of CV in scikit-learn?

asked Apr 25 '17 by Bobipuegi

People also ask

What is the difference between cross_val_score and cross_val_predict?

cross_val_score returns the score of each test fold, whereas cross_val_predict returns the predicted y values for the test fold. With cross_val_score(), you are using the average of the per-fold outputs, which is affected by the number of folds, because some folds may have a high error (not fit correctly).

What is the difference between cross_validate and cross_val_score?

The cross_validate function differs from cross_val_score in two ways: It allows specifying multiple metrics for evaluation. It returns a dict containing fit-times, score-times (and optionally training scores as well as fitted estimators) in addition to the test score.
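A minimal sketch of that difference (the synthetic data, model, and metric choices here are illustrative assumptions, not taken from the question):

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_validate

# Assumed toy data and model, just to make the sketch runnable.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = DecisionTreeRegressor(max_depth=3, random_state=0)

# cross_validate scores several metrics at once and also reports timings.
results = cross_validate(model, X, y, cv=10,
                         scoring=('r2', 'neg_mean_squared_error'),
                         return_train_score=True)
print(results['test_r2'].mean())                      # mean test R^2 across folds
print(results['test_neg_mean_squared_error'].mean())  # mean negated MSE
print(results['fit_time'].sum())                      # total fit time in seconds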

What is cross_val_predict?

The function cross_val_predict has a similar interface to cross_val_score, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set.

What is cross_val_score?

cross_val_score is a function which runs cross-validation on a dataset to test whether the model can generalise over the whole dataset. It returns a list of one score per split, and the average of these scores can be calculated to provide a single metric value for the dataset.
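For illustration, a short sketch of those per-split scores (the dataset and model below are assumptions chosen just to make it runnable):

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

# Assumed toy data and model for illustration only.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = DecisionTreeRegressor(max_depth=3, random_state=0)

scores = cross_val_score(model, X, y, cv=5, scoring='r2')
print(scores)                        # one R^2 per split, here an array of 5
print(scores.mean(), scores.std())   # collapse into a single metric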


1 Answer

cross_val_score returns the score of each test fold, whereas cross_val_predict returns the predicted y values for the test fold.

With cross_val_score(), you are using the average of the per-fold outputs, which is affected by the number of folds: with many folds, some folds may have a high error (not fit correctly), and these drag the average down.
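A hedged sketch of this fold-count effect, comparing the mean of per-fold R^2 scores against the R^2 of the pooled predictions (the synthetic data and max_depth are illustrative assumptions, not the asker's setup):

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import r2_score

# Assumed toy data and model for illustration only.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = DecisionTreeRegressor(max_depth=3, random_state=0)

for k in (5, 10, 50, 100):
    fold_scores = cross_val_score(model, X, y, cv=k, scoring='r2')  # k per-fold scores
    pooled = cross_val_predict(model, X, y, cv=k)                   # one prediction per sample
    print("k=%3d  mean fold R^2: %7.3f  pooled R^2: %7.3f"
          % (k, fold_scores.mean(), r2_score(y, pooled)))

As k grows, the test folds shrink, the per-fold R^2 values become unstable, and their mean can drop sharply, while the pooled score stays roughly constant.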

cross_val_predict(), in contrast, returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. [Note that only cross-validation strategies that assign all elements to a test set exactly once can be used.] So increasing the number of folds only increases the training data available for each test element, and hence its result may not be affected much.
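As a sketch, that pooling can be written as an explicit KFold loop, which makes it clear that each sample is predicted exactly once and the score is computed globally (again with assumed synthetic data):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

# Assumed toy data and model for illustration only.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = DecisionTreeRegressor(max_depth=3, random_state=0)

preds = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=10).split(X):
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])  # each sample predicted exactly once

print("pooled R^2:", r2_score(y, preds))          # one global score, not a mean of folds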

Edit (after comment)

Please have a look at the following answer on how cross_val_predict works:

How is scikit-learn cross_val_predict accuracy score calculated?

I think that cross_val_predict can overfit because, as the number of folds increases, more data is used for training and less for testing, so the resulting labels depend more heavily on the training data. Also, as noted above, the prediction for each sample is made only once, so it is more susceptible to how the data happens to be split. That is why most places and tutorials recommend using cross_val_score for analysis.

answered Oct 16 '22 by Vivek Kumar