gbm::interact.gbm vs. dismo::gbm.interactions

Background

The reference manual for the gbm package states that the interact.gbm function computes Friedman's H-statistic to assess the strength of variable interactions, and that the H-statistic is on the scale of [0, 1].

The reference manual for the dismo package does not cite any literature for how the gbm.interactions function detects and models interactions. Instead, it lists the general procedures used to detect and model interactions. The dismo vignette "Boosted Regression Trees for ecological modeling" states that the dismo package extends functions in the gbm package.

Question

How does dismo::gbm.interactions actually detect and model interactions?

Why

I am asking this question because gbm.interactions in the dismo package yields values greater than 1, which, according to the gbm reference manual, should not be possible for the H-statistic.

I checked the source tarball (tar.gz) of each package to see whether the code was similar. It is different enough that I cannot tell whether the two packages use the same method to detect and model interactions.
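For example, the comparison I am making looks roughly like the following; the model and data names are placeholders, and the model is assumed to have been fitted with dismo::gbm.step (which returns a gbm object):

library(gbm)
library(dismo)

# brt: a boosted regression tree model fitted with dismo::gbm.step on data frame dat
# (placeholder names, for illustration only)

# gbm's H-statistic for predictors 1 and 2; documented to lie in [0, 1]
interact.gbm(brt, data = dat, i.var = c(1, 2), n.trees = brt$n.trees)

# dismo's interaction statistics for all predictor pairs; this is where values > 1 appear
gbm.interactions(brt)$interactions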

Asked Apr 05 '15 by GNG

1 Answer

To summarize, the difference between the two approaches boils down to how the "partial dependence function" of the two predictors is estimated.

The dismo package is based on code originally given in Elith et al. (2008), and you can find the original source in the supplementary material. The paper describes the procedure only briefly. Essentially, model predictions are obtained over a grid of the two predictors, with all other predictors set at their means. The predictions are then regressed onto the grid values, and the mean squared error of that regression is multiplied by 1000. This statistic measures the departure of the model predictions from an additive (main-effects) combination of the two predictors, which signals a possible interaction.

From the dismo package, we can also obtain the relevant source code for gbm.interactions. The interaction test boils down to the following commands (copied directly from the source):

interaction.test.model <- lm(prediction ~ as.factor(pred.frame[,1]) + as.factor(pred.frame[,2]))

interaction.flag <- round(mean(resid(interaction.test.model)^2) * 1000,2)

pred.frame contains a grid of the two predictors in question, and prediction holds the predictions from the original fitted gbm model over that grid, with all predictors other than the two under consideration set at their means.
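To make this concrete, here is a minimal self-contained sketch of the same idea. The data, variable names, and tuning values below are invented for illustration and are not taken from the dismo source:

library(gbm)

# toy data with a built-in x1:x2 interaction (purely illustrative)
set.seed(1)
dat <- data.frame(x1 = runif(500), x2 = runif(500), x3 = runif(500))
dat$y <- with(dat, x1 + x2 + 2 * x1 * x2 + rnorm(500, sd = 0.1))
mod <- gbm(y ~ x1 + x2 + x3, data = dat, distribution = "gaussian",
           n.trees = 500, interaction.depth = 3, shrinkage = 0.05)

# grid over the two predictors of interest, with the remaining predictor at its mean
pred.frame <- expand.grid(x1 = seq(0, 1, length.out = 20),
                          x2 = seq(0, 1, length.out = 20))
pred.frame$x3 <- mean(dat$x3)

# model predictions over the grid
prediction <- predict(mod, newdata = pred.frame, n.trees = 500)

# regress the predictions on the grid values as factors (main effects only);
# the scaled mean squared residual is the dismo-style interaction statistic
fit <- lm(prediction ~ as.factor(pred.frame[, 1]) + as.factor(pred.frame[, 2]))
round(mean(resid(fit)^2) * 1000, 2)   # larger values suggest a stronger interaction

The last two lines mirror the commands quoted above from the gbm.interactions source.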

This is different from Friedman's H-statistic (Friedman & Popescu, 2005), which is estimated via formula (44) for any pair of predictors. That statistic is essentially the departure from additivity for the two predictors, averaged over the observed values of the other variables, NOT with the other variables set at their means. It is expressed as a proportion of the total variance of the joint partial dependence function of the two variables (i.e. the model-implied predictions), so it will always lie between 0 and 1.
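For reference, as I recall, the statistic defined by formula (44) has roughly the following form (LaTeX notation), where \hat{F}_{jk} is the mean-centred two-variable partial dependence function, \hat{F}_j and \hat{F}_k are the mean-centred one-variable partial dependence functions, and the sums run over the N training observations:

H^2_{jk} = \frac{\sum_{i=1}^{N} \left[ \hat{F}_{jk}(x_{ij}, x_{ik}) - \hat{F}_j(x_{ij}) - \hat{F}_k(x_{ik}) \right]^2}{\sum_{i=1}^{N} \hat{F}_{jk}^2(x_{ij}, x_{ik})}

The denominator is the empirical variance of the joint partial dependence, which is what keeps the reported statistic on the [0, 1] scale.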

Answered Nov 13 '22 by patr1ckm