
R caret train glmnet final model lambda values not as specified

I was using the caret package to tune a glmnet logistic regression model. While the lambda value from bestTune is one of the values I specified in tuneGrid, the lambda values of the final model are completely different:

require(caret)
set.seed(1)
x <- matrix(runif(1000), nrow = 100)
y <- factor(rbinom(100, 1, 0.5))
lambda.seq <- exp(seq(log(1e-5), log(1e0), length.out = 20))

model <- train(x, y, 
               method = "glmnet",
               family = "binomial", 
               tuneGrid = expand.grid(alpha = 1, 
                                      lambda = lambda.seq))
model$bestTune
#    alpha    lambda
# 13     1 0.0143845
model$finalModel$lambdaOpt
# [1] 0.0143845

model$finalModel$lambda
#  [1] 0.1236344527 0.1126511087 0.1026434947 0.0935249295 0.0852164325 0.0776460395
#  [7] 0.0707481794 0.0644631061 0.0587363814 0.0535184032 0.0487639757 0.0444319185
# [13] 0.0404847094 0.0368881594 0.0336111170 0.0306251980 0.0279045398 0.0254255774
# [19] 0.0231668392 0.0211087610 0.0192335169 0.0175248642 0.0159680036 0.0145494502
# [25] 0.0132569171 0.0120792091 0.0110061255 0.0100283716 0.0091374787 0.0083257303
# [31] 0.0075860954 0.0069121676 0.0062981097 0.0057386030 0.0052288013 0.0047642890
# [37] 0.0043410427 0.0039553964 0.0036040099 0.0032838396 0.0029921123 0.0027263013
# [43] 0.0024841042 0.0022634233 0.0020623470 0.0018791337 0.0017121967 0.0015600899
# [49] 0.0014214958 0.0012952140 0.0011801508 0.0010753094 0.0009797819 0.0008927408

model$finalModel$lambdaOpt %in% lambda.seq
# [1] TRUE

However, the final model's optimal lambda is not in the list of lambdas that the same model supposedly used:

model$finalModel$lambdaOpt %in% model$finalModel$lambda
# [1] FALSE

What explains these discrepancies in lambda?

asked Jul 22 '14 by maksay

1 Answer

The final model is basically a refit on your whole dataset, done AFTER alpha and lambda were optimized using resampling techniques.

If you print model$finalModel$call you can see the call that was made (the x and y structure is omitted for brevity):

    Call:  glmnet(x, y, family = "binomial", alpha = 1)

Here, alpha is set (if you had tuned over a sequence of alphas, this would be the optimum found), but no lambda is passed to glmnet, so an automatic sequence is generated based on your data and the model is fitted along that path. Predictions on the training set are then made at lambdaOpt (and at the rest of the sequence you gave). Take a look at the glmnet vignette for how you can request a different lambda after training.
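
For example, here is a minimal sketch showing that the refit glmnet object can still be queried at lambdaOpt (probs and the reuse of the training matrix x are purely illustrative): predict() linearly interpolates between the lambdas of the fitted path by default, and with exact = TRUE it refits at the requested value instead (recent glmnet versions then also need the original x and y):

    # query the refit path at lambdaOpt; glmnet interpolates between
    # the path's lambda values, since 0.0143845 is not one of them
    probs <- predict(model$finalModel,
                     newx = x,                          # reusing the training matrix
                     s    = model$finalModel$lambdaOpt, # 0.0143845
                     type = "response")
    head(probs)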

If you type:

    > names(model$modelInfo)
     [1] "label"      "library"    "type"       "parameters" "grid"       "loop"
     [7] "fit"        "predict"    "prob"       "predictors" "varImp"     "levels"
    [13] "tags"       "sort"       "trim"

and then walk through each of those sections, you can see what train is doing. In model$modelInfo$predict you can see how it predicts at lambdaOpt and at the rest of your sequence.
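
As a sketch of how to inspect this (getModelInfo is part of caret's exported API), you can print the registered prediction function directly; its body shows predictions being made at modelFit$lambdaOpt:

    # the predict function caret registered for method = "glmnet"
    model$modelInfo$predict
    # or, without a fitted model:
    getModelInfo("glmnet", regex = FALSE)[[1]]$predict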

When you print model$results you get your full list of lambda values together with the resampled performance for each one:

 alpha       lambda  Accuracy      Kappa AccuracySD    KappaSD
1      1 1.000000e-05 0.5698940 0.15166891 0.09061320 0.17133524
2      1 1.832981e-05 0.5698940 0.15166891 0.09061320 0.17133524
3      1 3.359818e-05 0.5698940 0.15166891 0.09061320 0.17133524
4      1 6.158482e-05 0.5698940 0.15166891 0.09061320 0.17133524
5      1 1.128838e-04 0.5698940 0.15166891 0.09061320 0.17133524
6      1 2.069138e-04 0.5698940 0.15166891 0.09061320 0.17133524
7      1 3.792690e-04 0.5698940 0.15166891 0.09061320 0.17133524
8      1 6.951928e-04 0.5698940 0.15166891 0.09061320 0.17133524
9      1 1.274275e-03 0.5675708 0.14690433 0.09071728 0.17085665
10     1 2.335721e-03 0.5643334 0.14059590 0.09153010 0.17204036
11     1 4.281332e-03 0.5629588 0.13822063 0.09403553 0.17715441
12     1 7.847600e-03 0.5694974 0.15221600 0.08791315 0.16433922
13     1 1.438450e-02 0.5700431 0.15448347 0.08864353 0.16509332
14     1 2.636651e-02 0.5695053 0.15189752 0.08113581 0.15184619
15     1 4.832930e-02 0.5635977 0.14112303 0.05833646 0.11617226
16     1 8.858668e-02 0.5305835 0.08983718 0.08116759 0.14752307
17     1 1.623777e-01 0.4800871 0.01124082 0.05827521 0.05715298
18     1 2.976351e-01 0.4725241 0.00000000 0.04488500 0.00000000
19     1 5.455595e-01 0.4725241 0.00000000 0.04488500 0.00000000
20     1 1.000000e+00 0.4725241 0.00000000 0.04488500 0.00000000
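
For instance, you can confirm that the row with the best resampled accuracy is exactly the combination reported as bestTune:

    model$results[which.max(model$results$Accuracy), ]
    #    alpha    lambda  Accuracy     Kappa AccuracySD   KappaSD
    # 13     1 0.0143845 0.5700431 0.1544835 0.08864353 0.1650933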

To summarise what's happening with caret + glmnet:

  1. it optimizes alpha and lambda within the tuneGrid you provided, using resampling techniques;

  2. it refits the model, now on the whole training set, with the optimal alpha, letting glmnet build its own lambda path (a manual refit with your own sequence is sketched after this list);

  3. it predicts on the whole training set at the lambdaOpt found in 1. and at the rest of the sequence of lambdas in tuneGrid.
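
If you want the final path to contain exactly the values you tuned over, a manual refit is one option. A minimal sketch (refit is an illustrative name, not something caret produces for you):

    library(glmnet)
    # refit directly with the tuned alpha and your own lambda sequence;
    # glmnet stores a user-supplied sequence sorted in decreasing order
    refit <- glmnet(x, y, family = "binomial",
                    alpha  = model$bestTune$alpha,
                    lambda = lambda.seq)
    setequal(refit$lambda, lambda.seq)
    # [1] TRUE
    predict(refit, newx = x, s = model$bestTune$lambda, type = "response")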

answered Nov 11 '22 by quartin