I am estimating a random effects logit model using glmer and I would like to report marginal effects for the independent variables. For glm models, the mfx package helps compute marginal effects (see the sketch after the example below). Is there a package or function that does this for glmer objects?
Thanks for your help.
A reproducible example is given below.
## mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
## as of 2020-08-24:
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank) #creating ranks
id <- rep(1:ceiling(nrow(mydata)/2), times=c(2)) #creating ID variable
mydata <- cbind(mydata,data.frame(id,stringsAsFactors=FALSE))
set.seed(12345)
mydata$ran <- runif(nrow(mydata),0,1) #creating a random variable
library(lme4)
cfelr <- glmer(admit ~ (1 | id) + rank + gpa + ran + gre, data = mydata, family = binomial)
summary(cfelr)
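(For reference, and to clarify what I mean by marginal effects: below is roughly what mfx reports for a fixed-effects-only glm version of this model. This is just a sketch; logitmfx() has no notion of the random intercept, so the (1 | id) term is dropped.)
library(mfx)
fe_me <- logitmfx(admit ~ rank + gpa + ran + gre, data = mydata)  # plain logit, marginal effects at the mean
fe_me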
You could use the ggeffects package (examples are in the package vignettes). For your code, this might look like this:
library(ggeffects)
# dat is a data frame with marginal effects
dat <- ggpredict(cfelr, terms = "rank")
plot(dat)
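If you want the numbers rather than a plot, the object returned by ggpredict() is itself a data frame of predicted probabilities, so you can print it or coerce it directly (a minimal sketch):
dat                  # prints predicted probabilities with confidence intervals
as.data.frame(dat)   # plain data frame, e.g. for further processing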
Or, as Benjamin described, you could use the sjPlot package's plot_model() function with plot type "pred" (this simply wraps the ggeffects package for marginal effects plots):
library(sjPlot)
plot_model(cfelr, type = "pred", terms = "rank")
Here's an approach using the margins package:
library(margins)
library(lme4)
gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp,
             family = binomial)
m <- margins(gm1, data = cbpp)
m
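To get the average marginal effects as a table with standard errors and confidence intervals, rather than just the printed AMEs, you can summarise the margins object (a minimal sketch):
summary(m)  # AMEs with SEs, z-statistics, p-values and confidence intervals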
This is a much less technical answer, but perhaps provides a useful resource. I am a fan of the sjPlot package, which provides plots of marginal effects of glmer objects, like so:
library(sjPlot)
sjp.glmer(cfelr, type = "eff")
The package provides a lot of options for exploring a glmer model's fixed and random effects as well. https://github.com/strengejacke/sjPlot
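Note that in more recent sjPlot versions sjp.glmer() has been deprecated; as far as I know, the equivalent marginal-effects plot now goes through plot_model(), e.g.:
plot_model(cfelr, type = "pred", terms = "rank")  # replacement for the retired sjp.glmer()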
My solution does not answer the question of getting marginal effects from a glmer object, but rather shows how to fit a marginally specified (population-averaged) logit model alongside the conditional one. I am only offering this write-up because the reproducible example provided was a conditional logistic regression with one random intercept, and I'm intending to be helpful. Please do not downvote; I will take this answer down if it is deemed too off-topic.
The R code is based on the work of Patrick Heagerty (click "View Raw" to see the pdf), and I include a reproducible example below from my GitHub version of his lnMLE package (excuse the warnings at installation -- I'm shoehorning Patrick's non-CRAN package). I'm omitting the output for everything except the last line, compare, which shows the fixed-effect coefficients side by side.
library(devtools)
install_github("lnMLE_1.0-2", "swihart")
library(lnMLE)
## run the example from the logit.normal.mle help page
## see also the accompanying document (click 'View Raw' on page below:)
## https://github.com/swihart/lnMLE_1.0-2/blob/master/inst/doc/lnMLEhelp.pdf
data(eye_race)
attach(eye_race)
marg_model <- logit.normal.mle(meanmodel = value ~ black,
                               logSigma = ~1,
                               id = eye_race$id,
                               model = "marginal",
                               data = eye_race,
                               tol = 1e-5,
                               maxits = 100,
                               r = 50)
marg_model
cond_model <- logit.normal.mle(meanmodel = value ~ black,
                               logSigma = ~1,
                               id = eye_race$id,
                               model = "conditional",
                               data = eye_race,
                               tol = 1e-5,
                               maxits = 100,
                               r = 50)
cond_model
compare <- round(cbind(marg_model$beta, cond_model$beta), 2)
colnames(compare) <- c("Marginal", "Conditional")
compare
The output of the last line:
compare
            Marginal Conditional
(Intercept)    -2.43       -4.94
black           0.08        0.15
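As a side note (not part of the original write-up), the gap between the marginal and conditional coefficients is roughly what the usual attenuation approximation predicts: with a normal random intercept of standard deviation sigma, the marginal logit coefficients are approximately the conditional ones divided by sqrt(1 + c^2 * sigma^2), where c = 16*sqrt(3)/(15*pi) (Zeger, Liang & Albert, 1988). A quick sketch in R, using a hypothetical sigma:
## attenuation approximation (sketch only; sigma = 3 is hypothetical,
## not taken from the fitted model)
attenuate <- function(beta_cond, sigma) {
  c_const <- 16 * sqrt(3) / (15 * pi)
  beta_cond / sqrt(1 + c_const^2 * sigma^2)
}
attenuate(-4.94, sigma = 3)  # about -2.4, in the ballpark of the marginal -2.43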
I attempted the reproducible example given, but had problems with both the glmer and lnMLE implementations; again I only include output pertaining to the comparison results and the warnings from the glmer()
call:
##original question / answer... glmer() function gave a warning and the lnMLE did not fit well...
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank) #creating ranks
id <- rep(1:ceiling(nrow(mydata)/2), times=c(2)) #creating ID variable
mydata <- cbind(mydata,data.frame(id,stringsAsFactors=FALSE))
set.seed(12345)
mydata$ran <- runif(nrow(mydata),0,1) #creating a random variable
library(lme4)
cfelr <- glmer(admit ~ (1 | id) + rank + gpa + ran + gre,
data=mydata,
family = binomial)
Which gave:
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.00161047 (tol = 0.001, component 2)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model is nearly unidentifiable: very large eigenvalue
- Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
- Rescale variables?
but I foolishly went on without rescaling, trying to apply logit.normal.mle to the example given. However, the conditional model doesn't converge or produce standard error estimates.
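(As an aside, here is a sketch of what rescaling could have looked like; the *_s variable names are mine, not part of the original attempt.)
## standardizing the continuous predictors usually removes the
## "Rescale variables?" warning from glmer() (illustrative names)
mydata$gre_s <- as.numeric(scale(mydata$gre))
mydata$gpa_s <- as.numeric(scale(mydata$gpa))
cfelr_scaled <- glmer(admit ~ (1 | id) + rank + gpa_s + ran + gre_s,
                      data = mydata, family = binomial)
The original, unscaled attempt continues below: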
summary(cfelr)
library(devtools)
install_github("lnMLE_1.0-2", "swihart")
library(lnMLE)
mydata$rank2 <- mydata$rank == 2
mydata$rank3 <- mydata$rank == 3
mydata$rank4 <- mydata$rank == 4
cfelr_cond <- logit.normal.mle(meanmodel = admit ~ rank2 + rank3 + rank4 + gpa + ran + gre,
                               logSigma = ~1,
                               id = id,
                               model = "conditional",
                               data = mydata,
                               r = 50,
                               tol = 1e-6,
                               maxits = 500)
cfelr_cond
cfelr_marg <- logit.normal.mle(meanmodel = admit ~ rank2 + rank3 + rank4 + gpa + ran + gre,
                               logSigma = ~1,
                               id = id,
                               model = "marginal",
                               data = mydata,
                               r = 50,
                               tol = 1e-6,
                               maxits = 500)
cfelr_marg
compare_glmer <- round(cbind(cfelr_marg$beta, cfelr_cond$beta, summary(cfelr)$coeff[, "Estimate"]), 3)
colnames(compare_glmer) <- c("Marginal", "Conditional", "glmer() Conditional")
compare_glmer
The last line reveals that cfelr_cond did not actually fit a conditional model but simply returned the marginal coefficients, with no standard errors.
> compare_glmer
            Marginal Conditional glmer() Conditional
(Intercept)   -4.407      -4.407              -4.425
rank2         -0.667      -0.667              -0.680
rank3         -1.832      -1.833              -1.418
rank4         -1.930      -1.930              -1.585
gpa            0.547       0.548               0.869
ran            0.860       0.860               0.413
gre            0.004       0.004               0.002
I hope to iron out these issues. Any help/comments appreciated. I'll give status updates when I can.