Speeding up ifelse() without writing C/C++?

I need to get the results of the following function

  getScore <- function(history, similarities) {
    nh <- ifelse(similarities < 0, 6 - history, history)
    x <- nh * abs(similarities)
    contados <- !is.na(history)
    x2 <- sum(x, na.rm = TRUE) / sum(abs(similarities[contados]), na.rm = TRUE)
    x2
  }

For example for the following vectors:

notes <- c(1:5, NA)
history <- sample(notes, 1000000, replace = TRUE)
similarities <- runif(1000000, -1, 1)

These vectors change inside a loop in my real code. Ten calls take:

ptm <- proc.time()
for (i in (1:10)) getScore(history, similarities)
proc.time() - ptm

   user  system elapsed 
   3.71    1.11    4.67 

Initially I suspected the for loop, but the profiling results point to ifelse().

Rprof("foo.out")
for (i in (1:10)) getScore(history, similarities)
Rprof(NULL)
summaryRprof("foo.out")

$by.self
           self.time self.pct total.time total.pct
"ifelse"        2.96    65.78       3.48     77.33
"-"             0.24     5.33       0.24      5.33
"getScore"      0.22     4.89       4.50    100.00
"<"             0.22     4.89       0.22      4.89
"*"             0.22     4.89       0.22      4.89
"abs"           0.22     4.89       0.22      4.89
"sum"           0.22     4.89       0.22      4.89
"is.na"         0.12     2.67       0.12      2.67
"!"             0.08     1.78       0.08      1.78

$by.total
           total.time total.pct self.time self.pct
"getScore"       4.50    100.00      0.22     4.89
"ifelse"         3.48     77.33      2.96    65.78
"-"              0.24      5.33      0.24     5.33
"<"              0.22      4.89      0.22     4.89
"*"              0.22      4.89      0.22     4.89
"abs"            0.22      4.89      0.22     4.89
"sum"            0.22      4.89      0.22     4.89
"is.na"          0.12      2.67      0.12     2.67
"!"              0.08      1.78      0.08     1.78

$sample.interval
[1] 0.02

$sampling.time
[1] 4.5

ifelse() is my performance bottleneck. Unless there is a way to speed up ifelse() in R, no great performance boost is likely.

However, ifelse() is already a vectorized approach. It seems to me that the only option left is C/C++. Is there a way to avoid compiled code?

Asked Jun 24 '16 by GabyLP

2 Answers

You can use logical multiplication for this task to achieve the same effect:

s <- similarities < 0
nh <- s*(6-history) + (!s)*history
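
As a quick sanity check (toy vectors, not the OP's data), the logical-multiplication trick reproduces ifelse() here, including NA propagation from history:

```r
# Hypothetical small example: compare the trick against ifelse().
history      <- c(1, 5, NA, 3)
similarities <- c(-0.5, 0.2, -0.1, 0)

s  <- similarities < 0
nh <- s * (6 - history) + (!s) * history   # TRUE/FALSE coerce to 1/0

stopifnot(identical(nh, ifelse(similarities < 0, 6 - history, history)))
```

Multiplying by `s` and `!s` zeroes out the branch not taken, so adding the two products selects the right value elementwise. One caveat: because `0 * NA` is NA in R, the trick propagates NA from *both* branches, whereas ifelse() only looks at the branch selected by the test. Here history feeds both branches, so the NA positions coincide and the results match.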

Benchmark on i7-3740QM:

f1 <- function(history, similarities) {
  s <- similarities < 0
  s * (6 - history) + (!s) * history
}
f2 <- function(history, similarities) ifelse(similarities < 0, 6 - history, history)
f3 <- function(history, similarities) {
  nh <- history
  ind <- similarities < 0
  nh[ind] <- 6 - nh[ind]
  nh
}

library(microbenchmark)
microbenchmark(f1(history, similarities),
               f2(history, similarities),
               f3(history, similarities))
## Unit: milliseconds
##                       expr        min         lq       mean     median         uq        max neval cld
##  f1(history, similarities)  22.830260  24.616770  28.313849  24.898700  25.651655  81.043713   100 a
##  f2(history, similarities) 364.514460 412.711781 408.371566 415.101149 417.345748 437.977256   100   c
##  f3(history, similarities)  84.220279  86.289480  92.646146  87.180165  89.616522 149.243051   100  b

On E5-2680 v2:

## Unit: milliseconds
##                       expr       min        lq      mean    median        uq       max neval cld
##  f1(history, similarities)  20.03963  20.10954  21.41055  20.68597  21.25920  50.95278   100 a  
##  f2(history, similarities) 314.54913 315.96621 324.91486 319.50290 325.93168 378.26016   100   c
##  f3(history, similarities)  73.81413  73.92162  76.10418  74.79893  75.84634 105.98770   100  b 

On T5600 (Core2 Duo Mobile):

## Unit: milliseconds
##                       expr      min       lq     mean   median       uq      max neval cld
##  f1(history, similarities) 147.2953 152.9307 171.0870 155.5632 167.0998 344.7524   100  b
##  f2(history, similarities) 408.5728 493.3886 517.0573 501.6993 525.8573 797.9624   100   c
##  f3(history, similarities) 102.9621 110.6003 131.1826 112.9961 125.3906 303.1170   100 a

Aha! My approach is slower on the Core 2 architecture.

Answered Nov 12 '22 by Matthew Lundberg

I have encountered this before. We don't have to use ifelse() all the time. If you look at how ifelse is written, by typing "ifelse" in your R console, you can see that the function is written in plain R and does various sanity checks, which is really inefficient for large vectors.
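
For reference, a simplified paraphrase of what ifelse() does internally (a sketch, not the verbatim source; type `ifelse` at the prompt to see the real thing, which also handles attributes and more edge cases):

```r
# Sketch of ifelse()'s core logic: allocate, mask, subassign twice.
my_ifelse <- function(test, yes, no) {
  ans <- test                    # result template, same length as test
  ok  <- !is.na(test)
  ans[ok & test]  <- rep(yes, length.out = length(test))[ok & test]
  ans[ok & !test] <- rep(no,  length.out = length(test))[ok & !test]
  ans                            # positions where test is NA stay NA
}
```

Both `yes` and `no` are evaluated in full and then subassigned through logical masks, on top of the NA bookkeeping; that is where the overhead comes from.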

Instead of using ifelse(), we can do this:

getScore <- function(history, similarities) {
  ######## old code #######
  # nh <- ifelse(similarities < 0, 6 - history, history)
  ######## old code #######
  ######## new code #######
  nh <- history
  ind <- similarities < 0
  nh[ind] <- 6 - nh[ind]
  ######## new code #######
  x <- nh * abs(similarities) 
  contados <- !is.na(history)
  sum(x, na.rm=TRUE) / sum(abs(similarities[contados]), na.rm = TRUE)
  }

And then let's check profiling result again:

Rprof("foo.out")
for (i in (1:10)) getScore(history, similarities)
Rprof(NULL)
summaryRprof("foo.out")

# $by.total
#            total.time total.pct self.time self.pct
# "getScore"       2.10    100.00      0.88    41.90
# "abs"            0.32     15.24      0.32    15.24
# "*"              0.26     12.38      0.26    12.38
# "sum"            0.26     12.38      0.26    12.38
# "<"              0.14      6.67      0.14     6.67
# "-"              0.14      6.67      0.14     6.67
# "!"              0.06      2.86      0.06     2.86
# "is.na"          0.04      1.90      0.04     1.90

# $sample.interval
# [1] 0.02

# $sampling.time
# [1] 2.1

We get a 2x+ performance boost. Furthermore, the profile is now nearly flat, with no single operation dominating execution time.

In R, vector indexing / reading / writing runs at the speed of C code, so use vectorized operations whenever you can.
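
To illustrate that point, here is a small (hypothetical, machine-dependent) timing comparison between a single C-level subassignment and an element-by-element R loop doing the same replacement:

```r
# Vectorized subassignment: one C-level pass over the vector.
x   <- runif(1e6)
ind <- x < 0.5
system.time(x[ind] <- 1 - x[ind])

# Equivalent element-wise R loop: pays interpreter overhead per element.
y <- runif(1e6)
system.time(for (i in which(y < 0.5)) y[i] <- 1 - y[i])
```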


Testing @Matthew's answer

mat_getScore <- function(history, similarities) {
  ######## old code #######
  # nh <- ifelse(similarities < 0, 6 - history, history)
  ######## old code #######
  ######## new code #######
  ind <- similarities < 0
  nh <- ind*(6-history) + (!ind)*history
  ######## new code #######
  x <- nh * abs(similarities) 
  contados <- !is.na(history)
  sum(x, na.rm=TRUE) / sum(abs(similarities[contados]), na.rm = TRUE)
  }

Rprof("foo.out")
for (i in (1:10)) mat_getScore(history, similarities)
Rprof(NULL)
summaryRprof("foo.out")

# $by.total
#                total.time total.pct self.time self.pct
# "mat_getScore"       2.60    100.00      0.24     9.23
# "*"                  0.76     29.23      0.76    29.23
# "!"                  0.40     15.38      0.40    15.38
# "-"                  0.34     13.08      0.34    13.08
# "+"                  0.26     10.00      0.26    10.00
# "abs"                0.20      7.69      0.20     7.69
# "sum"                0.18      6.92      0.18     6.92
# "<"                  0.16      6.15      0.16     6.15
# "is.na"              0.06      2.31      0.06     2.31

# $sample.interval
# [1] 0.02

# $sampling.time
# [1] 2.6

Ah? Slower?

The full profiling result shows that this approach spends more time on floating-point multiplication "*", and the logical negation "!" seems pretty expensive, while my approach requires only floating-point addition and subtraction.

Well, the result might also be architecture dependent. I am testing on an Intel Core 2 Duo at the moment, so benchmarks of the two approaches on various platforms are welcome.
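
For anyone who wants to contribute numbers, a minimal reproducible harness (assuming the microbenchmark package is installed) that regenerates the OP's data and times just the two nh computations:

```r
library(microbenchmark)

notes        <- c(1:5, NA)
history      <- sample(notes, 1e6, replace = TRUE)
similarities <- runif(1e6, -1, 1)

microbenchmark(
  subassign    = { nh <- history
                   ind <- similarities < 0
                   nh[ind] <- 6 - nh[ind]
                   nh },
  logical_mult = { ind <- similarities < 0
                   ind * (6 - history) + (!ind) * history },
  times = 100
)
```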


Remark

All profiling uses the OP's data from the question.

Answered Nov 12 '22 by Zheyuan Li