
Is the xgboost documentation wrong? (early stopping rounds and best and last iteration)

Below is a question about the xgboost early_stopping_rounds parameter and how it does, or does not, give the best iteration when it is the reason the fit ends.

In the xgboost documentation, one can see in the scikit-learn API section (link) that when the fit stops due to the early_stopping_rounds parameter:

Activates early stopping. Validation error needs to decrease at least every "early_stopping_rounds" round(s) to continue training. Requires at least one item in evals. If there’s more than one, will use the last. Returns the model from the last iteration (not the best one).

Reading this, it seems that the model returned in this case is not the best one but the last one. To access the best one when predicting, it says, one can call predict with the ntree_limit parameter set to the bst.best_ntree_limit given at the end of the fit.

In this sense, it should work the same way as xgboost's train, since the fit of the scikit-learn API seems to be only a wrapper around train and friends.

It is widely discussed here: stack overflow discussion, or here: another discussion.

But when I tried to check this with my data, I did not find the behavior I thought I should have. In fact, the behavior I encountered was not at all the one described in those discussions and in the documentation.

I call a fit this way:

import xgboost as xgb

reg = xgb.XGBRegressor(n_jobs=6, n_estimators=100, max_depth=5)

reg.fit(
   X_train,
   y_train,
   eval_metric='rmse',
   eval_set=[(X_train, y_train), (X_valid, y_valid)],
   verbose=True,
   early_stopping_rounds=6)

and here is what I get in the end:

[71]    validation_0-rmse:1.70071   validation_1-rmse:1.9382
[72]    validation_0-rmse:1.69806   validation_1-rmse:1.93825
[73]    validation_0-rmse:1.69732   validation_1-rmse:1.93803
Stopping. Best iteration:
[67]    validation_0-rmse:1.70768   validation_1-rmse:1.93734

and when I check the RMSE on the validation set I used:

from math import sqrt
from sklearn.metrics import mean_squared_error as mse

y_pred_valid = reg.predict(X_valid)
y_pred_valid_df = pd.DataFrame(y_pred_valid)
sqrt(mse(y_valid, y_pred_valid_df[0]))

I get

1.9373418403889535

If the fit had returned the last iteration instead of the best one, it should have given an RMSE around 1.93803, but it gave an RMSE of 1.93734, exactly the best score.

I checked again in two ways. [Edit] I've edited the code below according to @Eran Moshe's answer:

y_pred_valid = reg.predict(X_valid, ntree_limit=reg.best_ntree_limit)
y_pred_valid_df = pd.DataFrame(y_pred_valid)
sqrt(mse(y_valid, y_pred_valid_df[0]))

1.9373418403889535

and even if I call the fit (knowing the best iteration is the 67th) with only 68 estimators, so that I'm sure the last one is the best one:

reg = xgb.XGBRegressor(n_jobs=6, n_estimators=68, max_depth=5)

reg.fit(
   X_train,
   y_train,
   eval_metric='rmse',
   eval_set=[(X_train, y_train), (X_valid, y_valid)],
   verbose=True,
   early_stopping_rounds=6)

the result is the same:

1.9373418403889535

So this seems to lead to the idea that, contrary to what the documentation and those numerous discussions say, the fit of xgboost, when stopped by the early_stopping_rounds parameter, does give the best iteration, not the last one.
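The check being made here can be mimicked with a toy additive ensemble in plain Python (no xgboost involved; the per-tree contributions below are made-up numbers for illustration). A boosted model's prediction is the sum of all tree outputs, so predicting with all trees versus only the first best_iteration + 1 trees gives different values; comparing the resulting validation RMSE against the logged best score is exactly how one can tell which model predict actually used:

```python
# Toy additive ensemble: each "tree" contributes a fixed correction.
# Made-up numbers for illustration; not real xgboost output.

def predict(tree_outputs, ntree_limit=None):
    """Sum the first ntree_limit tree contributions (all trees if None)."""
    used = tree_outputs if ntree_limit is None else tree_outputs[:ntree_limit]
    return sum(used)

# Per-tree contributions for one sample; trees after index 3 "overfit"
# and drift the prediction away from its best value.
trees = [0.5, 0.3, 0.1, 0.05, -0.2, -0.3]
best_iteration = 3  # best validation score was reached at tree index 3

pred_last = predict(trees)                                   # all 6 trees
pred_best = predict(trees, ntree_limit=best_iteration + 1)   # first 4 trees

print(round(pred_last, 2), round(pred_best, 2))  # 0.45 0.95
```

Since the two predictions differ, a validation RMSE that matches the logged best score (1.93734 here) rather than the last-iteration score (1.93803) is evidence that predict used the best-iteration trees.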

Am I wrong? If so, where, and how do you explain the behavior I observed?

Thanks for your attention.

Lyxthe Lyxos, asked Nov 26 '18




2 Answers

I think it is not wrong, but inconsistent.

The documentation of the predict method is correct (e.g. see here). To be 100% sure, it is better to look into the code: xgb github. So predict behaves as stated in its documentation, but the fit documentation is outdated. Please post it as an issue on the XGB github: either they will fix the docs, or you will, and you will become an XGB contributor :)

Mischa Lisovyi, answered Oct 21 '22


You have a code error there.

Notice how

reg.predict(X_valid, ntree_limit=reg.best_ntree_limit)

Should be

y_pred_valid = reg.predict(X_valid, ntree_limit=reg.best_ntree_limit)

So in fact you're making the same comparison when calculating

sqrt(mse(y_valid, y_pred_valid_df[0]))

Xgboost is working just as you've read. early_stopping_rounds = x will train until the validation metric hasn't improved for x consecutive rounds.

And when predicting with ntree_limit=y, it will use ONLY the first y boosters.
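The stopping rule described above can be sketched in a few lines of plain Python (a toy illustration of the bookkeeping, not xgboost internals; the metric values are made up). Note how, with a patience of 6, the loop runs 6 rounds past the best one before stopping, which is why best and last iteration differ, just like [67] vs [73] in the question's log:

```python
def early_stop(val_errors, early_stopping_rounds):
    """Walk through a sequence of validation errors and stop once the
    best error hasn't improved for early_stopping_rounds consecutive
    rounds. Returns (best_iteration, last_iteration)."""
    best_iter, best_err, since_improve = 0, float("inf"), 0
    last_iter = 0
    for i, err in enumerate(val_errors):
        last_iter = i
        if err < best_err:
            best_err, best_iter, since_improve = err, i, 0
        else:
            since_improve += 1
            if since_improve >= early_stopping_rounds:
                break
    return best_iter, last_iter

# Error improves until round 3, then stalls; with a patience of 6,
# training runs on to round 9 before stopping -- so best != last.
errors = [1.0, 0.9, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76]
best, last = early_stop(errors, early_stopping_rounds=6)
print(best, last)  # 3 9
```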

Eran Moshe, answered Oct 22 '22