The Julia examples to compare performance against R seem particularly convoluted. https://github.com/JuliaLang/julia/blob/master/test/perf/perf.R
What is the fastest performance you can eke out of the two algorithms below (preferably with an explanation of what you changed to make it more R-like)?
```r
## mandel

mandel = function(z) {
    c = z
    maxiter = 80
    for (n in 1:maxiter) {
        if (Mod(z) > 2) return(n-1)
        z = z^2+c
    }
    return(maxiter)
}

mandelperf = function() {
    re = seq(-2,0.5,.1)
    im = seq(-1,1,.1)
    M = matrix(0.0,nrow=length(re),ncol=length(im))
    count = 1
    for (r in re) {
        for (i in im) {
            M[count] = mandel(complex(real=r,imag=i))
            count = count + 1
        }
    }
    return(M)
}

assert(sum(mandelperf()) == 14791)

## quicksort ##

qsort_kernel = function(a, lo, hi) {
    i = lo
    j = hi
    while (i < hi) {
        pivot = a[floor((lo+hi)/2)]
        while (i <= j) {
            while (a[i] < pivot) i = i + 1
            while (a[j] > pivot) j = j - 1
            if (i <= j) {
                t = a[i]
                a[i] = a[j]
                a[j] = t
            }
            i = i + 1;
            j = j - 1;
        }
        if (lo < j) qsort_kernel(a, lo, j)
        lo = i
        j = hi
    }
    return(a)
}

qsort = function(a) {
    return(qsort_kernel(a, 1, length(a)))
}

sortperf = function(n) {
    v = runif(n)
    return(qsort(v))
}

sortperf(5000)
```
The key word in this question is "algorithm":
> What is the fastest performance you can eke out of the two algorithms below (preferably with an explanation of what you changed to make it more R-like)?
As in "how fast can you make these algorithms in R?" The algorithms in question here are the standard Mandelbrot complex loop iteration algorithm and the standard recursive quicksort kernel.
There are certainly faster ways to compute the answers to the problems posed in these benchmarks – but not using the same algorithms. You can avoid recursion, avoid iteration, and avoid whatever else R isn't good at. But then you're no longer comparing the same algorithms.
If you really wanted to compute Mandelbrot sets in R or sort numbers, yes, this is not how you would write the code. You would either vectorize it as much as possible – thereby pushing all the work into predefined C kernels – or just write a custom C extension and do the computation there. Either way, the conclusion is that R isn't fast enough to get really good performance on its own – you need to have C do most of the work in order to get good performance.
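To make the "push the work into C kernels" point concrete, here is a small sketch of my own (not part of either answer) comparing an R-level loop with the equivalent vectorized expression; the vectorized form is usually dramatically faster because the iteration happens in compiled code:

```r
# My own illustration (not from the original post) of the loop-vs-vectorized gap:
# the same elementwise computation written both ways.
x <- runif(1e6)

f_loop <- function(x) {            # interpreted R-level loop
    out <- numeric(length(x))
    for (i in seq_along(x)) out[i] <- x[i]^2 + x[i]
    out
}

f_vec <- function(x) x^2 + x       # the loop runs inside R's C kernels

system.time(f_loop(x))
system.time(f_vec(x))
```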
And that's exactly the point of these benchmarks: in Julia you never have to rely on C code to get good performance. You can just write what you want to do in pure Julia and it will have good performance. If an iterative scalar loop algorithm is the most natural way to do what you want to do, then just do that. If recursion is the most natural way to solve the problem, then that's ok too. At no point will you be forced to rely on C for performance – whether via unnatural vectorization or writing custom C extensions. Of course, you can write vectorized code when it's natural, as it often is in linear algebra; and you can call C if you already have some library that does what you want. But you don't have to.
We do want to have the fairest possible comparison of the same algorithms across languages:
Hmm, in the Mandelbrot example the matrix M has its dimensions transposed; it should be

```r
M = matrix(0.0, nrow=length(im), ncol=length(re))
```

because it's filled by incrementing count in the inner loop (successive values of im).
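A quick aside of mine (not in the original answer): R fills matrices column by column, so consecutive linear indices walk down one column before moving to the next, which is why im has to index the rows:

```r
# Column-major filling: count walks down a column before moving to the next one
m <- matrix(0, nrow = 2, ncol = 3)
m[1:6] <- 1:6
m
#      [,1] [,2] [,3]
# [1,]    1    3    5
# [2,]    2    4    6
```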
My implementation creates a vector of complex numbers in mandelperf.1 and operates on all elements, using an index and subsetting to keep track of which elements of the vector have not yet satisfied the condition Mod(z) <= 2:
```r
mandel.1 = function(z, maxiter=80L) {
    c <- z
    result <- integer(length(z))
    i <- seq_along(z)           # indices of points still iterating
    n <- 0L
    while (n < maxiter && length(z)) {
        j <- Mod(z) <= 2        # which active points are still bounded?
        if (!all(j)) {
            result[i[!j]] <- n  # record the escape iteration for diverged points
            i <- i[j]           # and drop them from the working set
            z <- z[j]
            c <- c[j]
        }
        z <- z^2 + c            # one Mandelbrot step for all remaining points
        n <- n + 1L
    }
    result[i] <- maxiter        # points that never escaped
    result
}

mandelperf.1 = function() {
    re = seq(-2,0.5,.1)
    im = seq(-1,1,.1)
    mandel.1(complex(real=rep(re, each=length(im)), imaginary=im))
}
```
This gives a 13-fold speed-up (the results are equal but not identical because the original returns numeric rather than integer values):
```
> library(rbenchmark)
> benchmark(mandelperf(), mandelperf.1(),
+           columns=c("test", "elapsed", "relative"),
+           order="relative")
            test elapsed relative
2 mandelperf.1()   0.412  1.00000
1   mandelperf()   5.705 13.84709
> all.equal(sum(mandelperf()), sum(mandelperf.1()))
[1] TRUE
```
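As a side note of my own (not in the original answer), the equal-but-not-identical remark comes down to storage mode, and the rewrite still satisfies the benchmark's assertion:

```r
# mandelperf() fills a double matrix; mandel.1() returns an integer vector,
# so identical() would be FALSE even though the values agree.
typeof(mandelperf())           # "double"
typeof(mandelperf.1())         # "integer"
sum(mandelperf.1()) == 14791   # the assertion from the benchmark should still hold
```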
The quicksort example doesn't actually sort:
```
> set.seed(123L); qsort(sample(5))
[1] 2 4 1 3 5
```
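For reference, here is a sketch of my own (not from the original answer; qsort_kernel_fixed is just a name I made up) of two small changes that should make the in-place kernel sort correctly: the index increments belong inside the swap branch, and because R passes arguments by value, the result of the recursive call has to be assigned back to a:

```r
qsort_kernel_fixed = function(a, lo, hi) {
    i = lo
    j = hi
    while (i < hi) {
        pivot = a[floor((lo+hi)/2)]
        while (i <= j) {
            while (a[i] < pivot) i = i + 1
            while (a[j] > pivot) j = j - 1
            if (i <= j) {
                t = a[i]
                a[i] = a[j]
                a[j] = t
                i = i + 1   # moved inside the swap branch
                j = j - 1
            }
        }
        if (lo < j) a = qsort_kernel_fixed(a, lo, j)  # keep the recursively sorted copy
        lo = i
        j = hi
    }
    return(a)
}
```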
My main speed-up, though, was to vectorize the partition around the pivot:
```r
qsort_kernel.1 = function(a) {
    if (length(a) < 2L)
        return(a)
    pivot <- a[floor(length(a) / 2)]
    # three-way partition: recurse on the strictly smaller and strictly larger
    # parts, and keep elements equal to the pivot in place
    c(qsort_kernel.1(a[a < pivot]),
      a[a == pivot],
      qsort_kernel.1(a[a > pivot]))
}

qsort.1 = function(a) {
    qsort_kernel.1(a)
}

sortperf.1 = function(n) {
    v = runif(n)
    return(qsort.1(v))
}
```
This gives a 7-fold speed-up (in comparison to the uncorrected original):
```
> benchmark(sortperf(5000), sortperf.1(5000),
+           columns=c("test", "elapsed", "relative"),
+           order="relative")
              test elapsed relative
2 sortperf.1(5000)    6.60 1.000000
1   sortperf(5000)   47.73 7.231818
```
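And a quick check of my own (not part of the original answer) that the vectorized version, unlike the original kernel, really does sort:

```r
set.seed(123L)
x <- runif(10)
identical(qsort.1(x), sort(x))   # should be TRUE: same values, ascending order
```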
Since in the original comparison Julia is about 30 times faster than R for mandel, and 500 times faster for quicksort, the implementations above are still not really competitive.