
In R, how do you loop over the rows of a data frame really fast?

Suppose that you have a data frame with many rows and many columns.

The columns have names. You want to access rows by number, and columns by name.

For example, one (possibly slow) way to loop over the rows is

for (i in 1:nrow(df)) {
  print(df[i, "column1"])
  # do more things with the data frame...
}

Another way is to extract separate columns as vectors (like column1_vec <- df[["column1"]]) and index those vectors inside one loop. This approach can be fast, but it is also inconvenient if you want to access many columns.
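A minimal sketch of that column-vector approach, using a small made-up data frame (the column names and values are purely illustrative):

```r
# Made-up data frame for illustration
df <- data.frame(column1 = 1:3, column2 = c(10, 20, 30))

# Extract the columns once, outside the loop; these are plain vectors,
# which are much cheaper to index than df[i, "column1"]
column1_vec <- df[["column1"]]
column2_vec <- df[["column2"]]

total <- 0
for (i in seq_len(nrow(df))) {
  total <- total + column1_vec[i] * column2_vec[i]
}
total  # 1*10 + 2*20 + 3*30 = 140
```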

Is there a fast way of looping over the rows of a data frame? Is some other data structure better for looping fast?

Winston C. Yang asked Jul 26 '10



2 Answers

I think I need to make this a full answer because I find comments harder to track, and I already lost one comment on this... There is an example by nullglob that demonstrates the differences between for loops and the apply family of functions much better than other examples do. When the function applied at each step is very slow, that is where all the time is consumed, and you won't find differences among the looping variants. But when the function is trivial, you can see how much the looping itself influences things.

I'd also like to add that some members of the apply family that went unexplored in other examples have interesting performance properties. First, I'll show replications of nullglob's relative results on my machine.

n <- 1e6
sinI <- numeric(n)  # preallocate so the loop doesn't grow the vector
system.time(for (i in 1:n) sinI[i] <- sin(i))
   user  system elapsed 
  5.721   0.028   5.712 

# lapply runs much faster for the same result
system.time(sinI <- lapply(1:n, sin))
   user  system elapsed 
  1.353   0.012   1.361 

He also found sapply much slower. Here are some others that weren't tested.

Plain old apply to a matrix version of the data...

mat <- matrix(1:n, ncol = 1)
system.time(sinI <- apply(mat, 1, sin))
   user  system elapsed 
  8.478   0.116   8.531 

So, the apply() command itself is substantially slower than the for loop. (The for loop is not slowed down appreciably if I use sin(mat[i, 1]) inside it instead.)
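As a sanity check, the loop-over-a-matrix variant described above can be sketched like this; a smaller n than in the timings is used here so it runs quickly, and the result is verified against the vectorized sin:

```r
# Minimal sketch of looping over matrix rows with explicit indexing;
# n is deliberately smaller than in the timing runs above
n <- 1e5
mat <- matrix(1:n, ncol = 1)
sinI <- numeric(n)                        # preallocate the result
for (i in 1:n) sinI[i] <- sin(mat[i, 1])  # index the matrix inside the loop
stopifnot(isTRUE(all.equal(sinI, sin(1:n))))
```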

Another one that doesn't seem to be tested in other posts is tapply.

system.time(sinI <- tapply(1:n, 1:n, sin))
   user  system elapsed 
 12.908   0.266  13.589 

Of course, one would never use tapply this way, and its utility goes far beyond any such speed problem in most cases.
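For completeness, here is the kind of job tapply is actually meant for: applying a function within groups. The scores and groups below are made up for illustration:

```r
# tapply's intended use: grouped summaries, not element-wise loops.
# The data here is made-up illustration data.
scores <- c(10, 20, 30, 40)
group  <- c("a", "a", "b", "b")
means  <- tapply(scores, group, mean)
means
#  a  b 
# 15 35
```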

John answered Sep 23 '22


The fastest way is to not loop at all (i.e. use vectorized operations). One of the only cases in which you genuinely need a loop is when one iteration depends on the result of another. Otherwise, try to do as much vectorized computation outside the loop as possible.

If you do need to loop, then using a for loop is essentially as fast as anything else (lapply can be a little faster, but other apply functions tend to be around the same speed as for).
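To make the contrast concrete, here is a minimal sketch comparing a vectorized call with the equivalent for loop; both produce the same result, but the vectorized form does the work in a single call:

```r
n <- 1e5

# Vectorized: one call over the whole vector, no explicit loop
sin_vec <- sin(1:n)

# Equivalent for loop: same result, typically much slower in R
sin_loop <- numeric(n)  # preallocate
for (i in 1:n) sin_loop[i] <- sin(i)

stopifnot(isTRUE(all.equal(sin_vec, sin_loop)))
```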

Shane answered Sep 24 '22