
Using sklearn cross_val_score and kfolds to fit and help predict model

I'm trying to understand k-fold cross-validation with the sklearn Python module.

I understand the basic flow:

  • instantiate a model, e.g. model = LogisticRegression()
  • fit the model, e.g. model.fit(xtrain, ytrain)
  • predict, e.g. model.predict(xtest)
  • use e.g. cross_val_score to test the fitted model's accuracy.
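The basic flow above can be sketched as follows; the toy dataset and variable names are illustrative, not from the question:

```python
# A minimal sketch of the basic flow, using a scikit-learn toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=8)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)             # fit on the training data
preds = model.predict(X_test)           # predict on the test features
scores = cross_val_score(model, X_train, y_train, cv=5)  # per-fold accuracy
print(scores)
```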

Where I'm confused is using sklearn KFold with cross_val_score. As I understand it, the cross_val_score function fits the model and predicts on the k folds, giving you an accuracy score for each fold.

e.g. using code like this:

from sklearn import linear_model
from sklearn.model_selection import KFold, cross_val_score

kf = KFold(n_splits=5, shuffle=True, random_state=8)
lr = linear_model.LogisticRegression()
accuracies = cross_val_score(lr, X_train, y_train, scoring='accuracy', cv=kf)

So if I have a dataset split into training and testing data, and I use cross_val_score with k folds to determine the accuracy of the algorithm on my training data for each fold, is the model now fitted and ready for prediction on the testing data? In the case above, could I just call lr.predict?

hselbie asked Feb 16 '17


People also ask

What does sklearn Model_selection Cross_val_score do?

cross_val_score. Evaluate a score by cross-validation.

What is the difference between Cross_validate and Cross_val_score?

The cross_validate function differs from cross_val_score in two ways: It allows specifying multiple metrics for evaluation. It returns a dict containing fit-times, score-times (and optionally training scores as well as fitted estimators) in addition to the test score.
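The difference described above can be sketched like this (the dataset and metric choices are illustrative):

```python
# cross_val_score returns a single array of scores; cross_validate returns
# a dict with timings and one test-score array per requested metric.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_validate

X, y = load_iris(return_X_y=True)
lr = LogisticRegression(max_iter=1000)

scores = cross_val_score(lr, X, y, scoring='accuracy', cv=5)
results = cross_validate(lr, X, y, cv=5,
                         scoring=['accuracy', 'f1_macro'])  # multiple metrics
print(sorted(results.keys()))
```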

How do you predict cross-validation?

Cross-validation in your case would build k estimators (assuming k-fold CV). You can then check the predictive power and variance of the technique on your data via the mean of the quality measure (higher is better) and the standard deviation of the quality measure.

How is Cross_val_score calculated?

"cross_val_score" splits the data into say 5 folds. Then for each fold it fits the data on 4 folds and scores the 5th fold. Then it gives you the 5 scores from which you can calculate a mean and variance for the score. You crossval to tune parameters and get an estimate of the score.


1 Answer

No the model is not fitted. Looking at the source code for cross_val_score:

scores = parallel(delayed(_fit_and_score)(clone(estimator), X, y, scorer,
                                          train, test, verbose, None,
                                          fit_params))

As you can see, cross_val_score clones the estimator before fitting each fold's training data to it. cross_val_score outputs an array of scores, which you can analyse to see how the estimator performs on different folds of the data and whether it overfits. You can read more about it in the scikit-learn cross-validation documentation.

Once you are satisfied with the results of cross_val_score, you need to fit the estimator on the whole training data before you can use it to predict on test data.
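Putting the answer together, a minimal sketch of the full workflow (toy dataset and split are illustrative) might look like:

```python
# Cross-validate on the training data only, then fit once on all of it
# before predicting on the held-out test set.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=8)

lr = LogisticRegression(max_iter=1000)
kf = KFold(n_splits=5, shuffle=True, random_state=8)
accuracies = cross_val_score(lr, X_train, y_train, scoring='accuracy', cv=kf)

lr.fit(X_train, y_train)         # the explicit fit cross_val_score never did
predictions = lr.predict(X_test) # now the estimator is actually fitted
print(accuracies.mean())
```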

Vivek Kumar answered Nov 08 '22