I have a quite large data frame, about 10 million rows. It has columns x and y, and what I want is to compute
hypot <- function(x) {sqrt(x[1]^2 + x[2]^2)}
for each row. Using apply, it would take a lot of time (about 5 minutes, extrapolating from smaller sizes) and a lot of memory.
But that seems to be too much for me, so I've tried different things:

- compiling the hypot function reduces the time by about 10%
- plyr greatly increases the running time

What's the fastest way to do this thing?
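For concreteness, here is a minimal sketch of the apply call in question (my_data and the small size are stand-ins for the real 10-million-row frame):

my_data <- data.frame(x = runif(1e4), y = runif(1e4))  # small stand-in for the real data
res <- apply(my_data, 1, hypot)  # calls hypot once per row; slow and memory-hungry at scale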
What about with(my_data, sqrt(x^2 + y^2))?
set.seed(101)
d <- data.frame(x=runif(1e5),y=runif(1e5))
library(rbenchmark)
Two different per-line functions, one taking advantage of vectorization:
hypot <- function(x) sqrt(x[1]^2+x[2]^2)
hypot2 <- function(x) sqrt(sum(x^2))
Try compiling these too:
library(compiler)
chypot <- cmpfun(hypot)
chypot2 <- cmpfun(hypot2)
benchmark(sqrt(d[,1]^2+d[,2]^2),
with(d,sqrt(x^2+y^2)),
apply(d,1,hypot),
apply(d,1,hypot2),
apply(d,1,chypot),
apply(d,1,chypot2),
replications=50)
Results:
test replications elapsed relative user.self sys.self
5 apply(d, 1, chypot) 50 61.147 244.588 60.480 0.172
6 apply(d, 1, chypot2) 50 33.971 135.884 33.658 0.172
3 apply(d, 1, hypot) 50 63.920 255.680 63.308 0.364
4 apply(d, 1, hypot2) 50 36.657 146.628 36.218 0.260
1 sqrt(d[, 1]^2 + d[, 2]^2) 50 0.265 1.060 0.124 0.144
2 with(d, sqrt(x^2 + y^2)) 50 0.250 1.000 0.100 0.144
As expected, the with() solution and the column-indexing solution à la Tyler Rinker are essentially identical; hypot2 is about twice as fast as the original hypot (but still about 150 times slower than the vectorized solutions). As already pointed out by the OP, compilation doesn't help very much.
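For completeness, another fully vectorized variant (my addition, not part of the benchmark above); since rowSums is implemented in C, I'd expect it to perform in the same ballpark as the with() solution, though I haven't timed it here:

sqrt(rowSums(d^2))  # square each column, sum across rows, then take the square root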
While Ben Bolker's answer is comprehensive, I will explain other reasons to avoid apply on data.frames.

apply will convert your data.frame to a matrix. This creates a copy (a waste of time and memory), and may also cause unintended type conversions.
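A small illustration of the type-conversion point (my own example, not from the original answer): a single character column forces the whole intermediate matrix to character, so the numeric columns reach your function as strings.

d2 <- data.frame(x = 1:3, y = 4:6, id = c("a", "b", "c"))
m <- as.matrix(d2)  # what apply() builds internally
typeof(m)           # "character": the numeric columns were coerced
m[1, "x"]           # "1", a string, so x^2 would now fail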
Given that you have 10 million rows of data, I would suggest you look at the data.table package, which will let you do things efficiently in terms of memory and time.

For example, using tracemem:
tracemem(d)
[1] "<0x2f2f4410>"
x <- apply(d,1, hypot2)
tracemem[0x2f2f4410 -> 0x2f31b8b8]: as.matrix.data.frame as.matrix apply
This is even worse if you then assign to a column in d:
d$x <- apply(d,1, hypot2)
tracemem[0x2f2f4410 -> 0x2ee71cb8]: as.matrix.data.frame as.matrix apply
tracemem[0x2f2f4410 -> 0x2fa9c878]:
tracemem[0x2fa9c878 -> 0x2fa9c3d8]: $<-.data.frame $<-
tracemem[0x2fa9c3d8 -> 0x2fa9c1b8]: $<-.data.frame $<-
4 copies! With 10 million rows, that will probably come back and bite you at some point.
If we use with, there is no copying involved if we assign to a vector:
y <- with(d, sqrt(x^2 + y^2))
But there will be if we assign to a column in the data.frame d:
d$y <- with(d, sqrt(x^2 + y^2))
tracemem[0x2fa9c1b8 -> 0x2faa00d8]:
tracemem[0x2faa00d8 -> 0x2faa0f48]: $<-.data.frame $<-
tracemem[0x2faa0f48 -> 0x2faa0d08]: $<-.data.frame $<-
Now, if you use data.table and := to assign by reference, there is no copying:
library(data.table)
DT <- data.table(d)
tracemem(DT)
[1] "<0x2d67a9a0>"
DT[,y := sqrt(x^2 + y^2)]
No copies!
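(A side note of mine, not from the original answer: data.table(d) itself copies d while building DT. If that copy matters too, setDT() converts a data.frame into a data.table in place.)

library(data.table)
setDT(d)                   # converts d by reference; no copy of the columns
d[, y := sqrt(x^2 + y^2)]  # same by-reference assignment as above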
Perhaps I will be corrected here, but another memory issue to consider is that sqrt(x^2 + y^2) will internally create four temporary variables: x^2, y^2, x^2 + y^2, and then sqrt(x^2 + y^2).

The following will be slower, but only involves two variables being created:
DT[, rowid := .I]  # or equivalently: DT[, rowid := seq_len(nrow(DT))]
DT[, y2 := sqrt(x^2 + y^2), by = rowid]
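A rough way to see the trade-off (my own sketch; exact timings will vary by machine): the grouped version avoids the four full-length temporaries but pays per-group overhead for every single row, so it is considerably slower.

library(data.table)
DT <- data.table(x = runif(1e5), y = runif(1e5))
DT[, rowid := .I]
system.time(DT[, y1 := sqrt(x^2 + y^2)])              # vectorized: fast, full-length temporaries
system.time(DT[, y2 := sqrt(x^2 + y^2), by = rowid])  # one row per group: slower, scalar temporaries
identical(DT$y1, DT$y2)                               # TRUE: both give the same result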