
Eigendecompositions are 5 times slower in Julia than in Mathematica?

I am new to Julia and primarily work in Mathematica, so I probably have a few elementary mistakes floating around. I attempted to time how long Julia took to compute the eigensystem of a random matrix, and found it was 5-6 times slower than in Mathematica.

In Julia:

D=1000*(rand(1000,1000)-0.5);
@time (E,F)=eig(D);

Out: elapsed time: 7.47950706 seconds (79638920 bytes allocated)

In Mathematica:

First@Timing@Eigensystem[RandomReal[{-500, 500}, {1000, 1000}]]

Out: 1.310408

For 2000 x 2000 arrays the story is similar: Julia's time grows slightly less steeply than Mathematica's, but it is still slower, taking 22 seconds versus Mathematica's 8 seconds.

From what I've read of Julia's standard linear algebra library, decompositions are implemented by calling LAPACK, which I thought was supposed to be very fast, so I'm confused about why the Julia code runs so much slower. Does anyone know why this is the case? Is it doing some kind of balancing or array-symmetry detection that Mathematica doesn't do, or is it actually slower?

Also, this is a syntax question and probably a silly error, but how do you change the balancing in Julia? I tried

@time (E,F)=eig(D[, balance=:nobalance]);

exactly as copied and pasted from the Julia manual, but it just gave a syntax error, so something's wrong.

I am using Windows 7 64-bit, with Julia version 0.2.0 64-bit, installed using the instructions at Steven Johnson's site, with Anaconda installed first to take care of prerequisites. I am using Mathematica student edition version 9.0.1.

EDIT 1:

Executing versioninfo() yielded

Julia Version 0.2.0
Commit 05c6461 (2013-11-16 23:44 UTC)
Platform Info:
System: Windows (x86_64-w64-mingw32)
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY)
LAPACK: libopenblas
LIBM: libopenlibm

So it looks like I'm using OpenBLAS for both BLAS and LAPACK. Once I get the Mathematica implementation info, I will add that as well.

EDIT 2:

It appears that Mathematica on Windows probably uses Intel MKL for its BLAS.

Asked by DumpsterDoofus on Feb 08 '14.

1 Answer

The eigen calculation in Julia is outsourced to LAPACK and BLAS, and I think the same is true for Mathematica. Julia can be built against different BLAS and LAPACK implementations, so you are effectively comparing your choice of BLAS and LAPACK for Julia with Mathematica's (probably Intel MKL).

The default choice for Julia is OpenBLAS, which is fast on most architectures, and on my machine Julia is faster than Mathematica for the eigen calculation. If you are on Linux and have picked BLAS and LAPACK from a repository, it is very likely that they are much slower than OpenBLAS.
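If you want to check which libraries your build is linked against, versioninfo() already shows it; the sketch below also prints two internal Base constants from the 0.2/0.3 era (these are not part of the public API and may not exist in later versions):

versioninfo()
println(Base.libblas_name)    # e.g. "libopenblas"
println(Base.liblapack_name)  # e.g. "libopenblas"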

The balancing option has recently been added to Julia, but by mistake it was not added to the function eig, which is only a MATLAB-compatible interface to the eigfact function. Writing eigfact(A, balance=:nobalance) should work.

Edit 1: Further investigation has shown that the difference is due to a threading problem in OpenBLAS on Windows. If Julia's BLAS is restricted to one thread, the timings are comparable to Mathematica's, but if more threads are allowed the calculation slows down. This doesn't seem to be a problem on Mac or Linux, but as mentioned above, the performance of OpenBLAS generally depends on the architecture.
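A minimal sketch of the single-thread workaround, assuming the blas_set_num_threads function available in the Julia 0.2/0.3 era (current Julia exposes this as LinearAlgebra.BLAS.set_num_threads):

# Compare OpenBLAS timings with one thread vs. several threads.
# blas_set_num_threads is the 0.2/0.3-era name; adjust for newer versions.
D = 1000*(rand(1000,1000) - 0.5)

blas_set_num_threads(1)   # restrict OpenBLAS to a single thread
@time eig(D)

blas_set_num_threads(4)   # allow multiple threads again
@time eig(D)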

Edit 2: The balancing interface has recently changed; balancing can now be switched off by writing eigfact(A, permute=false, scale=false).
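A short usage sketch of the newer keywords; accessing the result with F[:values] and F[:vectors] follows the factorization interface of that era (in current Julia the function is eigen and the fields are F.values and F.vectors):

# Eigendecomposition with balancing switched off (no permutation, no scaling).
D = 1000*(rand(1000,1000) - 0.5)
F = eigfact(D, permute=false, scale=false)
F[:values]   # eigenvalues
F[:vectors]  # eigenvectors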

Answered by Andreas Noack.