Difference between predict(model) and predict(model$finalModel) using caret for classification in R

What's the difference between

predict(rf, newdata=testSet)

and

predict(rf$finalModel, newdata=testSet) 

I train the model with preProcess=c("center", "scale"):

tc <- trainControl("repeatedcv", number=10, repeats=10, classProbs=TRUE, savePred=T)
rf <- train(y~., data=trainingSet, method="rf", trControl=tc, preProc=c("center", "scale"))

and I receive 0 true positives when I run it on a centered and scaled testSet:

testSetCS <- testSet
xTrans <- preProcess(testSetCS)
testSetCS<- predict(xTrans, testSet)
testSet$Prediction <- predict(rf, newdata=testSet)
testSetCS$Prediction <- predict(rf, newdata=testSetCS)

but I receive some true positives when I run it on the unscaled testSet. I have to use rf$finalModel to get true positives on the centered and scaled testSet, and the rf object on the unscaled one... what am I missing?


Edit:

Tests:

tc <- trainControl("repeatedcv", number=10, repeats=10, classProbs=TRUE, savePred=T)
RF <-  train(Y~., data= trainingSet, method="rf", trControl=tc) #normal trainingData
RF.CS <- train(Y~., data= trainingSet, method="rf", trControl=tc, preProc=c("center", "scale")) #scaled and centered trainingData

On the normal testSet:

RF predicts reasonably                      (Sensitivity = 0.33, Specificity = 0.97)
RF$finalModel predicts poorly               (Sensitivity = 0.74, Specificity = 0.36)
RF.CS predicts reasonably                   (Sensitivity = 0.31, Specificity = 0.97)
RF.CS$finalModel same results as RF.CS      (Sensitivity = 0.31, Specificity = 0.97)

On the centered and scaled testSetCS:

RF predicts very poorly                     (Sensitivity = 0.00, Specificity = 1.00)
RF$finalModel predicts reasonably           (Sensitivity = 0.33, Specificity = 0.98)
RF.CS predicts like RF                      (Sensitivity = 0.00, Specificity = 1.00)
RF.CS$finalModel predicts like RF           (Sensitivity = 0.00, Specificity = 1.00)

So it seems as if $finalModel needs the testSet in the same format as the trainingSet, whereas the train object accepts only uncentered and unscaled data, regardless of the selected preProcess parameter? (A sketch of what that would mean in code is below the prediction code.)

Prediction code (where testSet is the normal data and testSetCS is centered and scaled):

testSet$Prediction <- predict(RF, newdata=testSet)
testSet$PredictionFM <- predict(RF$finalModel, newdata=testSet)
testSet$PredictionCS <- predict(RF.CS, newdata=testSet)
testSet$PredictionCSFM <- predict(RF.CS$finalModel, newdata=testSet)

testSetCS$Prediction <- predict(RF, newdata=testSetCS)
testSetCS$PredictionFM <- predict(RF$finalModel, newdata=testSetCS)
testSetCS$PredictionCS <- predict(RF.CS, newdata=testSetCS)
testSetCS$PredictionCSFM <- predict(RF.CS$finalModel, newdata=testSetCS)
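
For illustration, here is a minimal sketch of what "same format as the trainingSet" would mean in code: fit the centering/scaling on the training data and apply those same statistics to the test set before calling the underlying random forest. This is not code I actually ran; the names xTransTrain and testSetTS are made up, and it assumes all predictors are numeric.

 # Fit center/scale on the TRAINING data, apply it to the test data,
 # then hand the transformed predictors to the underlying randomForest fit.
 xTransTrain <- preProcess(trainingSet[, names(trainingSet) != "Y"],
                           method = c("center", "scale"))
 testSetTS   <- predict(xTransTrain, testSet[, names(testSet) != "Y"])
 predict(RF.CS$finalModel, newdata = testSetTS)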
asked Jan 13 '14 by Frank



1 Answer

Frank,

This is really similar to your other question on Cross Validated.

You really need to

1) show your exact prediction code for each result

2) give us a reproducible example.

With the normal testSet, RF.CS and RF.CS$finalModel should not be giving you the same results and we should be able to reproduce that. Plus, there are syntax errors in your code so it can't be exactly what you executed.

Finally, I'm not really sure why you would use the finalModel object at all. The point of train is to handle the details and doing things this way (which is your option) circumvents the complete set of code that would normally be applied.
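
To be concrete, here is a minimal sketch of the intended usage with your objects (assuming testSet holds the raw, untransformed data from your question): predict.train applies the centering/scaling stored in the fit, so you never transform the test set yourself.

 # predict.train applies the preProcess stored in RF.CS to newdata
 predict(RF.CS, newdata = testSet)                 # predicted classes
 predict(RF.CS, newdata = testSet, type = "prob")  # class probabilities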

Here is a reproducible example:

 library(caret)    # train(), trainControl(), preProcess()
 library(mlbench)  # Sonar data
 data(Sonar)

 set.seed(1)
 inTrain <- createDataPartition(Sonar$Class)
 training <- Sonar[inTrain[[1]], ]
 testing <- Sonar[-inTrain[[1]], ]

 pp <- preProcess(training[,-ncol(Sonar)])
 training2 <- predict(pp, training[,-ncol(Sonar)])
 training2$Class <- training$Class
 testing2 <- predict(pp, testing[,-ncol(Sonar)])
 testing2$Class <- testing$Class

 tc <- trainControl("repeatedcv", 
                    number=10, 
                    repeats=10, 
                    classProbs=TRUE, 
                    savePred=T)
 set.seed(2)
 RF <-  train(Class~., data= training, 
              method="rf", 
              trControl=tc)
 #normal trainingData
 set.seed(2)
 RF.CS <- train(Class~., data= training, 
                method="rf", 
                trControl=tc, 
                preProc=c("center", "scale")) 
 #scaled and centered trainingData

Here are some results:

 > ## These should not be the same
 > all.equal(predict(RF, testing,  type = "prob")[,1],
 +           predict(RF, testing2, type = "prob")[,1])
 [1] "Mean relative difference: 0.4067554"
 > 
 > ## Nor should these
 > all.equal(predict(RF.CS, testing,  type = "prob")[,1],
 +           predict(RF.CS, testing2, type = "prob")[,1])
 [1] "Mean relative difference: 0.3924037"
 > 
 > all.equal(predict(RF.CS,            testing, type = "prob")[,1],
 +           predict(RF.CS$finalModel, testing, type = "prob")[,1])
 [1] "names for current but not for target"
 [2] "Mean relative difference: 0.7452435" 
 >
 > ## These should be and are close (just based on the 
 > ## random sampling used in the final RF fits)
 > all.equal(predict(RF,    testing, type = "prob")[,1],
 +           predict(RF.CS, testing, type = "prob")[,1])
 [1] "Mean relative difference: 0.04198887"

Max

answered Oct 19 '22 by topepo