I am trying to do fixed effects linear regression with R. My data looks like
dte yr id v1 v2
. . . . .
. . . . .
. . . . .
I then decided to simply do this by making yr a factor and using lm:
lm(v1 ~ factor(yr) + v2 - 1, data = df)
However, this seems to run out of memory. I have 20 levels in my factor, and df is 14 million rows, which takes about 2 GB to store. I am running this on a machine with 22 GB dedicated to this process.
I then decided to try things the old-fashioned way: create dummy variables for each of my years, t1 to t20, by doing:
df$t1 <- 1*(df$yr==1)
df$t2 <- 1*(df$yr==2)
df$t3 <- 1*(df$yr==3)
...
and simply compute:
solve(crossprod(x), crossprod(x,y))
This runs without a problem and produces the answer almost right away.
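(For reference, the same dummy-variable matrix can be built in one call with model.matrix; a rough sketch, with x and y standing for the matrix and response passed to solve above:)
x <- cbind(model.matrix(~ factor(yr) - 1, data = df), v2 = df$v2)  # 20 year dummies plus v2
y <- df$v1
beta <- solve(crossprod(x), crossprod(x, y))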
I am specifically curious: what is it about lm that makes it run out of memory when I can compute the coefficients just fine? Thanks.
None of the answers so far points in the right direction.
The accepted answer by @idr confuses lm and summary.lm: lm computes no diagnostic statistics at all; summary.lm does. So he is really talking about summary.lm.
@Jake's answer states a fact about the numerical stability of QR factorization versus LU / Cholesky factorization. Aravindakshan's answer expands on this by pointing out the number of floating point operations behind both approaches (though, as he said, he did not count the cost of computing the matrix cross product). But do not confuse FLOP counts with memory costs: both methods have the same memory usage in LINPACK / LAPACK. Specifically, the argument that the QR method costs more RAM to store the Q factor is bogus. The compacted storage explained in lm(): What is qraux returned by QR decomposition in LINPACK / LAPACK clarifies how the QR factorization is computed and stored. The speed issue of QR versus Cholesky is detailed in my answer Why the built-in lm function is so slow in R?, and my answer on faster lm provides a small routine lm.chol using the Cholesky method, which is 3 times faster than the QR method.
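For illustration, here is a minimal Cholesky-based fitter in the spirit of that lm.chol routine (my own sketch, not the code from that answer):
chol_fit <- function(X, y) {
  R <- chol(crossprod(X))                   # X'X = R'R, with R upper triangular
  z <- forwardsolve(t(R), crossprod(X, y))  # solve R' z = X'y
  drop(backsolve(R, z))                     # solve R beta = z
}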
@Greg's answer / suggestion for biglm is good, but it does not answer the question. Since biglm is mentioned, I would point out that the QR decomposition differs between lm and biglm. biglm computes Householder reflections so that the resulting R factor has positive diagonal entries. See Cholesky factor via QR factorization for details. The reason biglm does this is that the resulting R will then be the same as the Cholesky factor; see QR decomposition and Choleski decomposition in R for more information. Also, apart from biglm, you can use mgcv. Read my answer biglm predict unable to allocate a vector of size xx.x MB for more.
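As a quick check of that claim (a sketch with a small random matrix, not the question's data):
set.seed(0)
X <- matrix(rnorm(50), 10, 5)
R_chol <- chol(crossprod(X))             # upper-triangular Cholesky factor of X'X
R_qr <- qr.R(qr(X))                      # R factor from plain QR; some diagonals may be negative
R_qr <- diag(sign(diag(R_qr))) %*% R_qr  # flip those rows so diagonals are positive, the convention biglm uses
all.equal(R_chol, R_qr, check.attributes = FALSE)  # should be TRUE up to numerical tolerance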
After this summary of the other answers, it is time to post my own.
In order to fit a linear model, lm will:
- generate a model frame from your data frame;
- generate a model matrix from the model frame and your model formula;
- call lm.fit for the QR factorization;
- return the result of the QR factorization, together with the model frame, in the fitted lmObject.
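In code, these steps look roughly like this (a simplified sketch using the formula from the question; the real lm also handles weights, offsets, subsetting, contrasts, and so on):
mf <- model.frame(v1 ~ factor(yr) + v2 - 1, data = df)  # model frame: a copy of the columns used
X <- model.matrix(attr(mf, "terms"), mf)                # model matrix: the wide, all-numeric matrix
y <- model.response(mf)
fit <- lm.fit(X, y)                                     # QR factorization
# lm then bundles fit, the terms and the model frame into the returned lmObject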
You said your input data frame with 5 columns costs 2 GB to store. With 20 factor levels, the resulting model matrix has about 25 columns, taking 10 GB of storage. Now let's see how memory usage grows when we call lm.
- [global environment] initially you have the data frame, costing 2 GB;
- [lm environment] it is copied to a model frame, costing 2 GB;
- [lm environment] a model matrix is then generated, costing 10 GB;
- [lm.fit environment] a copy of the model matrix is made and then overwritten by the QR factorization, costing 10 GB;
- [lm environment] the result of lm.fit is returned, costing 10 GB;
- [global environment] the result of lm.fit is further returned by lm, costing another 10 GB;
- [global environment] the model frame is further returned by lm, costing 2 GB.

So, a total of 46 GB of RAM is required, far more than your available 22 GB.
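Just to make the bookkeeping explicit, the tally (in GB, taken directly from the steps above) is:
2 + 2 + 10 + 10 + 10 + 10 + 2
# [1] 46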
Actually, if lm.fit could be "inlined" into lm, we would save 20 GB of that cost. But there is no way to inline one R function inside another R function.
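That said, if you only want R's QR path without lm's model-frame and return-value overhead, you can build the model matrix yourself and call the fitter directly; a sketch (it still keeps roughly three copies of the model matrix, as discussed below):
X <- model.matrix(~ factor(yr) + v2 - 1, data = df)  # build the dummies-plus-v2 matrix once
y <- df$v1
fit <- lm.fit(X, y)   # or .lm.fit(X, y) in recent R versions, which skips even more bookkeeping
head(coef(fit))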
Maybe we can take a small example to see what happens around lm.fit:
X <- matrix(rnorm(30), 10, 3) # a `10 * 3` model matrix
y <- rnorm(10) ## response vector
tracemem(X)
# [1] "<0xa5e5ed0>"
qrfit <- lm.fit(X, y)
# tracemem[0xa5e5ed0 -> 0xa1fba88]: lm.fit
So indeed, X is copied when it is passed into lm.fit. Let's have a look at what qrfit contains:
str(qrfit)
#List of 8
# $ coefficients : Named num [1:3] 0.164 0.716 -0.912
# ..- attr(*, "names")= chr [1:3] "x1" "x2" "x3"
# $ residuals : num [1:10] 0.4 -0.251 0.8 -0.966 -0.186 ...
# $ effects : Named num [1:10] -1.172 0.169 1.421 -1.307 -0.432 ...
# ..- attr(*, "names")= chr [1:10] "x1" "x2" "x3" "" ...
# $ rank : int 3
# $ fitted.values: num [1:10] -0.466 -0.449 -0.262 -1.236 0.578 ...
# $ assign : NULL
# $ qr :List of 5
# ..$ qr : num [1:10, 1:3] -1.838 -0.23 0.204 -0.199 0.647 ...
# ..$ qraux: num [1:3] 1.13 1.12 1.4
# ..$ pivot: int [1:3] 1 2 3
# ..$ tol : num 1e-07
# ..$ rank : int 3
# ..- attr(*, "class")= chr "qr"
# $ df.residual : int 7
Note that the compact QR matrix qrfit$qr$qr is as large as the model matrix X. It is created inside lm.fit, but on exit of lm.fit it is copied. So in total, we will have three "copies" of X:
- the original one in the global environment;
- the one copied into lm.fit, which is then overwritten by the QR factorization;
- the one returned by lm.fit.

In your case, X is 10 GB, so the memory cost associated with lm.fit alone is already 30 GB, let alone the other costs associated with lm.
On the other hand, let's have a look at
solve(crossprod(X), crossprod(X,y))
X takes 10 GB, but crossprod(X) is only a 25 * 25 matrix, and crossprod(X, y) is just a length-25 vector. They are tiny compared with X, so memory usage does not increase at all.
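To put numbers on "tiny" (simple size arithmetic, assuming 8-byte doubles):
25 * 25 * 8   # crossprod(X): a 25 x 25 matrix, 5000 bytes (about 5 KB)
25 * 8        # crossprod(X, y): a length-25 vector, 200 bytes
# versus roughly 10 GB for the model matrix X itself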
Maybe you are worried that a local copy of X will be made when crossprod is called? Not at all! Unlike lm.fit, which both reads and writes X, crossprod only reads X, so no copy is made. We can verify this with our toy matrix X:
tracemem(X)
crossprod(X)
You will see no copying message!
If you want a short summary of all the above, here it is:
- the memory cost for lm.fit(X, y) (or even .lm.fit(X, y)) is three times as large as that for solve(crossprod(X), crossprod(X,y));
- the memory cost for lm is 3 to 6 times as large as that for solve(crossprod(X), crossprod(X,y)). The lower bound of 3 is never reached, while the upper bound of 6 is reached when the model matrix is the same as the model frame, i.e., when there are no factor variables or "factor-alike" terms like bs() and poly().

In addition to what idris said, it's also worth pointing out that lm() does not solve for the parameters using the normal equations as you illustrated in your question, but rather uses QR decomposition, which is less efficient but tends to produce more numerically accurate solutions.
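One way to see that accuracy difference is a nearly collinear design, where forming crossprod(X) squares the condition number (a toy sketch, not taken from any of the answers):
set.seed(42)
x1 <- rnorm(100)
x2 <- x1 + 1e-9 * rnorm(100)               # two almost identical predictors
X <- cbind(intercept = 1, x1 = x1, x2 = x2)
y <- 1 + x1 + x2 + rnorm(100)
coef(lm.fit(X, y))                          # pivoted QR detects the near rank deficiency (one NA coefficient)
try(solve(crossprod(X), crossprod(X, y)))   # normal equations: crossprod(X) is numerically singular,
                                            # so this typically errors or returns unstable estimates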
You might want to consider using the biglm package. It fits lm models by using smaller chunks of data.
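For completeness, a sketch of biglm's chunked-update pattern (the chunking scheme here is just for illustration; in real use the chunks would come from disk or a database rather than from an in-memory split):
library(biglm)
df$yr <- factor(df$yr)                         # fix the factor levels so every chunk builds the same dummy columns
chunks <- split(df, rep_len(1:10, nrow(df)))   # ten arbitrary row chunks
fit <- biglm(v1 ~ yr + v2 - 1, data = chunks[[1]])
for (i in 2:length(chunks)) fit <- update(fit, moredata = chunks[[i]])
coef(fit)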