
Fastest R equivalent to MATLAB's reshape() method?

I am converting a MATLAB script into R and regretting it so far, as the R version is currently slower. I'm trying to use "vectorized functions" as much as possible, but I'm relatively new to R and am not sure exactly what qualifies. From my research, for loops are only slower than the apply() family in R if the loop body uses lots of operators (including parentheses); otherwise, I don't see what else would slow R down here. Here is working code that I want to speed up.

somPEs   <- 9
inputPEs <- 6
initial_w <- matrix(1, nrow=somPEs, ncol=inputPEs)
# runif(i) draws length(i) values, so each row of ones yields inputPEs uniforms;
# apply() binds the results as columns, so w has dimensions inputPEs x somPEs
w <- apply(initial_w, 1, function(i) runif(i))
# Reshape w to a 3D array of dimension c(sqrt(somPEs), sqrt(somPEs), inputPEs)
nw <- array(0, dim=c(sqrt(somPEs), sqrt(somPEs), inputPEs))
for (i in 1:inputPEs) {
  nw[,,i] <- matrix(w[i,], nrow=sqrt(somPEs), ncol=sqrt(somPEs), byrow=TRUE)
}
w <- nw

In MATLAB, this reshaping is handled by the built-in reshape function, like so:

w = reshape(w,[sqrt(somPEs) sqrt(somPEs) inputPEs]);

I timed my current R code and it's actually quite fast, but I'd still like to learn about vectorization and how to rewrite the loop with apply() for readability's sake.

user  system elapsed 
0.003   0.000   0.002 
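For the readability goal specifically, the loop can be rewritten with the apply family. A sketch (vapply chosen here because it makes the result's shape explicit; this variant is not from the original post):

```r
somPEs   <- 9
inputPEs <- 6
w <- matrix(runif(somPEs * inputPEs), nrow = inputPEs, ncol = somPEs)

# Build one sqrt(somPEs) x sqrt(somPEs) slice per row of w; vapply's
# FUN.VALUE template makes the output a 3x3x6 array directly.
nw <- vapply(1:inputPEs,
             function(i) matrix(w[i, ], nrow = sqrt(somPEs), byrow = TRUE),
             FUN.VALUE = matrix(0, sqrt(somPEs), sqrt(somPEs)))
```

This is equivalent to the for loop above but avoids preallocating and indexing nw by hand.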
asked Mar 15 '23 by lrthistlethwaite

2 Answers

The first step is to convert your array w from 6x9 to 3x3x6 size, which in your case can be done by transposing and then changing the dimension:

neww <- t(w)
dim(neww) <- c(sqrt(somPEs), sqrt(somPEs), inputPEs)

This is almost what we want, except that the first two dimensions are flipped. You can use the aperm function to transpose them:

neww <- aperm(neww, c(2, 1, 3))
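To see why the transpose and the aperm are both needed, here is a toy check with made-up values (3 rows mapping to 2x2 slices): R's `dim<-` reinterprets the data column-major, so transposing first lines the rows of w up in memory, and aperm then flips each slice to match a byrow fill.

```r
w <- matrix(1:12, nrow = 3, byrow = TRUE)  # rows: 1:4, 5:8, 9:12
neww <- t(w)                               # rows of w become contiguous
dim(neww) <- c(2, 2, 3)                    # column-major reinterpretation
neww <- aperm(neww, c(2, 1, 3))            # transpose each slice
# neww[, , 1] now equals matrix(w[1, ], 2, 2, byrow = TRUE)
```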

This should be a good deal quicker than looping through the matrix and individually copying over data by row. To see this, let's look at a larger example with 10,000 rows and 100 columns (which will be mapped to a 10x10x10k matrix):

josilber <- function(w) {
  neww <- t(w)
  dim(neww) <- c(sqrt(dim(w)[2]), sqrt(dim(w)[2]), dim(w)[1])
  aperm(neww, c(2, 1, 3))
}
OP <- function(w) {
  nw <- array(0, dim=c(sqrt(dim(w)[2]), sqrt(dim(w)[2]), dim(w)[1]))
  for (i in 1:(dim(w)[1])) {
    nw[,,i] <- matrix(w[i,], nrow=sqrt(dim(w)[2]), ncol=sqrt(dim(w)[2]), byrow=TRUE)
  }
  nw
}
bigw <- matrix(runif(1000000), nrow=10000, ncol=100)
all.equal(josilber(bigw), OP(bigw))
# [1] TRUE
library(microbenchmark)
microbenchmark(josilber(bigw), OP(bigw))
# Unit: milliseconds
#            expr       min       lq      mean     median        uq       max neval
#  josilber(bigw)  8.483245  9.08430  14.46876   9.431534  11.76744  135.7204   100
#        OP(bigw) 83.379053 97.07395 133.86606 117.223236 129.28317 1553.4381   100

The approach using t, dim, and aperm is more than 10x faster in median runtime than the looping approach.
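Note also that if you only need MATLAB's exact semantics, no aperm is required at all: both MATLAB's reshape and R's `dim<-` fill column-major, so a plain dimension assignment is the direct equivalent. A minimal helper (the name reshape_ml is made up for illustration):

```r
# Column-major reshape, mirroring MATLAB's reshape(x, dims)
reshape_ml <- function(x, dims) {
  stopifnot(length(x) == prod(dims))  # same guard MATLAB applies
  dim(x) <- dims
  x
}
```

The aperm step in the answer above is only needed because the question's loop fills each slice byrow.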

answered Mar 18 '23 by josliber

I did not test the speed, but you could try

nw1 <- aperm(`dim<-`(t(w), c(3, 3, 6)), c(2, 1, 3))

> nw1
, , 1

          [,1]      [,2]      [,3]
[1,] 0.8257185 0.5475478 0.4157915
[2,] 0.8436991 0.3310513 0.1546463
[3,] 0.1794918 0.1836032 0.2675192

, , 2

          [,1]      [,2]      [,3]
[1,] 0.6914582 0.1674163 0.2921129
[2,] 0.2558240 0.4269716 0.7335542
[3,] 0.6416367 0.8771934 0.6553210

, , 3

          [,1]       [,2]      [,3]
[1,] 0.9761232 0.05223183 0.6651574
[2,] 0.5740032 0.80621864 0.2295017
[3,] 0.1138926 0.76009870 0.6932736

, , 4

            [,1]      [,2]      [,3]
[1,] 0.437871558 0.5172516 0.1145181
[2,] 0.006923583 0.3235762 0.3751655
[3,] 0.823235642 0.4586850 0.6013853

, , 5

          [,1]      [,2]      [,3]
[1,] 0.7425735 0.1665975 0.8659373
[2,] 0.1418979 0.1878132 0.2357267
[3,] 0.6963537 0.5391961 0.1112467

, , 6

          [,1]       [,2]       [,3]
[1,] 0.7246276 0.02896792 0.04692648
[2,] 0.7563403 0.22027518 0.41138672
[3,] 0.8303413 0.31908307 0.25180560
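The backtick-quoted `` `dim<-`(...) `` call is just the functional form of a replacement assignment; the same one-liner can be spelled out in the more conventional style (equivalent, shown here with a freshly generated w of the question's shape):

```r
w <- matrix(runif(54), nrow = 6, ncol = 9)  # same 6x9 shape as in the question
tw <- t(w)                    # transpose so each original row is contiguous
dim(tw) <- c(3, 3, 6)         # column-major reshape, like MATLAB's reshape
nw1 <- aperm(tw, c(2, 1, 3))  # swap the first two dims to fill slices by row
```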
answered Mar 18 '23 by Robert