I am trying to test the speed of Julia ODE solvers. I used the Lorenz equation in the tutorial:
using DifferentialEquations
using Plots
function lorenz(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz,u0,tspan)
sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
Loading the packages took about 25 s at the beginning, and the code ran for 7 s on a Windows 10 quad-core laptop in a Jupyter notebook. I understand that Julia needs to precompile packages first; is that why the loading time was so long? I found the 25 s unbearable. Also, when I ran the solver again with different initial values it took much less time (~1 s). Why is that? Is this the typical speed?
Tl;dr:

1. Julia package precompilation makes subsequent using calls quicker, at the cost of the first one storing some compilation data. This is only re-triggered on each package update.
2. using has to pull in the package, which takes a little bit (dependent on how much can precompile).

This really isn't a DifferentialEquations.jl thing; this is just a Julia package thing. 25 s would have to include the precompilation time: the first time you load a Julia package, it precompiles, and that doesn't need to happen again until the next update. That's probably the longest part of the initialization, and it is quite long for DifferentialEquations.jl, but again it only happens each time you update the package code. Then, each session, there's a small initialization cost for using. DiffEq is quite large, so it does take a bit to initialize:
@time using DifferentialEquations
5.201393 seconds (4.16 M allocations: 235.883 MiB, 4.09% gc time)
Then as noted in the comments you also have:
@time using Plots
6.499214 seconds (2.48 M allocations: 140.948 MiB, 0.74% gc time)
Then, the first time you run
function lorenz(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz,u0,tspan)
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
6.993946 seconds (7.93 M allocations: 436.847 MiB, 1.47% gc time)
But then the second and third time:
0.010717 seconds (72.21 k allocations: 6.904 MiB)
0.011703 seconds (72.21 k allocations: 6.904 MiB)
So what's going on here? The first time Julia runs a function, it compiles it. So the first time you run solve, it compiles all of its internal functions as it runs. All of the subsequent runs are without compilation. DifferentialEquations.jl also specializes on the function itself, so if we change the function:
function lorenz2(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz2,u0,tspan)
we will incur some of the compilation time again:
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
3.690755 seconds (4.36 M allocations: 239.806 MiB, 1.47% gc time)
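This first-call compilation cost isn't specific to solve; any Julia function shows the same effect, and each new argument type triggers a new specialization. A minimal illustration (actual timings are machine-dependent):

```julia
# First call to a function compiles a specialized method; later calls reuse it.
square(x) = x^2

@time square(2)      # includes compilation time on the first call
@time square(2)      # subsequent calls run the already-compiled code
@time square(2.0)    # a new argument type compiles a new specialization
```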
So that's the what; now the why. There are a few things going on here. First of all, Julia packages do not fully precompile: they don't keep cached compiled versions of actual methods between sessions. This is on the 1.x release list, and it would get rid of that first hit, making it similar to calling a C/Fortran package since you would just be hitting ahead-of-time (AOT) compiled functions. So that'll be nice, but for now just note that there is a startup time.
Now let's talk about changing functions. Every function in Julia automatically specializes on its arguments (see this blog post for details). The key idea is that every function in Julia is a separate concrete type, and since the problem type here is parameterized on that function type, changing the function triggers compilation. Note that it's only changing the type that triggers recompilation: you can change parameters of the function (if you had parameters), you can change the initial conditions, etc., without recompiling.
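The "every function is a separate concrete type" point can be checked directly in the REPL; a small sketch:

```julia
f(x) = x + 1
g(x) = x + 1   # identical body, but still a different function

# Each function gets its own singleton type, so code specialized on f
# cannot be reused for g even though they compute the same thing.
typeof(f) == typeof(g)   # false
```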
Is it worth it? Well, maybe. We want to specialize to make the difficult calculations fast. Compilation time is constant (i.e. you can solve a 6-hour ODE and compilation will still only take a few seconds), so the computationally costly calculations aren't affected. Monte Carlo simulations where you're running thousands of parameters and initial conditions aren't affected either, because if you're just changing values of initial conditions and parameters it won't recompile. But interactive use where you are changing functions does get a second or so hit in there, which isn't nice. One answer from the Julia devs is to spend post-Julia-1.0 time speeding up compilation; I don't know the details, but I am assured there's some low-hanging fruit there.
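For example, re-solving with only values changed reuses all the compiled code. A sketch using remake (note: this uses the modern in-place signature f(du,u,p,t) of current DifferentialEquations.jl, whereas the answer above uses the older (t,u,du) order):

```julia
using DifferentialEquations

# Modern in-place signature: f(du, u, p, t)
function lorenz!(du, u, p, t)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end

prob = ODEProblem(lorenz!, [1.0,1.0,1.0], (0.0,100.0))
sol1 = solve(prob, reltol=1e-8, abstol=1e-8)   # pays the compilation cost once

# remake swaps in new values without changing any types,
# so this second solve reuses the already-compiled solver code:
prob2 = remake(prob, u0 = [2.0,1.0,1.0])
sol2 = solve(prob2, reltol=1e-8, abstol=1e-8)
```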
Can we get rid of it? Yes. DiffEq Online doesn't recompile for each function because it's geared towards online use.
function lorenz3(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
    nothing
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
f = NSODEFunction{true}(lorenz3,tspan[1],u0)
prob = ODEProblem{true}(f,u0,tspan)
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
1.505591 seconds (860.21 k allocations: 38.605 MiB, 0.95% gc time)
And now we can change the function and not incur compilation cost:
function lorenz4(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
    nothing
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
f = NSODEFunction{true}(lorenz4,tspan[1],u0)
prob = ODEProblem{true}(f,u0,tspan)
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
0.038276 seconds (242.31 k allocations: 10.797 MiB, 22.50% gc time)
And tada: by wrapping the function in NSODEFunction (which internally uses FunctionWrappers.jl), it no longer specializes per-function, and you hit the compilation time once per Julia session (and, once that's cached, once per package update). But notice that this has about a 2x-4x runtime cost, so I am not sure it will be enabled by default. We could make this happen by default inside the problem-type constructor (i.e. no extra specialization by default, but the user can opt into more speed at the cost of interactivity), but I am unsure what the better default is (feel free to comment on the issue with your thoughts). It will definitely get documented soon after Julia does its keyword-argument changes, so this "compilation-free" mode will be a standard way to use the package, even if not the default.
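The FunctionWrappers.jl trick can be seen in isolation: wrapping two different functions in the same concrete wrapper type means code compiled against the wrapper type works for both without recompiling. A minimal sketch (assuming FunctionWrappers.jl is installed):

```julia
using FunctionWrappers: FunctionWrapper

f1(x) = 2x
f2(x) = x + 1

# Both wrappers have the identical concrete type
# FunctionWrapper{Float64,Tuple{Float64}}, so a solver compiled against
# that type never recompiles when the wrapped function changes.
w1 = FunctionWrapper{Float64,Tuple{Float64}}(f1)
w2 = FunctionWrapper{Float64,Tuple{Float64}}(f2)

typeof(w1) == typeof(w2)   # true: one compiled code path serves both
```

The trade-off is exactly the 2x-4x mentioned above: the wrapper call goes through a function pointer, so Julia can no longer inline or optimize across the call.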
But just to put it into perspective,
import numpy as np
from scipy.integrate import odeint
y0 = [1.0,1.0,1.0]
t = np.linspace(0, 100, 10001)
def f(u,t):
    return [10.0*(u[1]-u[0]), u[0]*(28.0-u[2])-u[1], u[0]*u[1]-(8/3)*u[2]]
%timeit odeint(f,y0,t,atol=1e-8,rtol=1e-8)
1 loop, best of 3: 210 ms per loop
we're looking at whether this interactive convenience should be made the default, making us 5x faster instead of 20x faster than SciPy here (though our default will usually be much more accurate than SciPy's default, but that's data for another time, which can be found in the benchmarks, or just ask). On one hand it makes sense for ease of use, but on the other hand, if users don't know to re-enable the specialization for long calculations and Monte Carlo (which is where you really want speed), then lots of people will take a 2x-4x performance hit, which could amount to extra days/weeks of computation. Ehh... tough choices.
So in the end there's a mixture of optimization choices and some precompilation features missing from Julia that affect the interactivity without affecting the true runtime speed. If you're looking to estimate parameters using some big Monte Carlo, or solve a ton of SDEs, or solve a big PDE, we have that down: that was our first goal, and we made sure to hit it as well as possible. But playing around in the REPL does have these 2-3 second "glitches", which we also cannot ignore (better than playing around in C/Fortran, of course, but still not ideal for a REPL). For this, I've shown you that there are solutions already being developed and tested, so hopefully this time next year we can have a better answer for that specific case.
Two other things to note. If you're only using the ODE solvers, you can just do using OrdinaryDiffEq to avoid downloading/installing/compiling/importing all of DifferentialEquations.jl (this is described in the manual). Also, using saveat like that probably isn't the fastest way to solve this problem: solving it with far fewer saved points and using the dense output as necessary may be better here.
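The dense-output alternative looks like this: let the solver save only its adaptive steps, then interpolate, since the solution object is callable at any time in tspan. A sketch (using the modern in-place signature f(du,u,p,t) of current DifferentialEquations.jl):

```julia
using DifferentialEquations

function lorenz!(du, u, p, t)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end

prob = ODEProblem(lorenz!, [1.0,1.0,1.0], (0.0,100.0))

# Without saveat, only the adaptive steps are stored; dense=true (the
# default here) keeps a continuous interpolant alongside them.
sol = solve(prob, reltol=1e-8, abstol=1e-8)

sol(55.3)         # interpolated state at t = 55.3
sol(0:0.01:100)   # evaluate on the fine grid only if/when needed
```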
I opened an issue detailing how we can reduce the "between function" compilation time without losing the speedup that specializing gives. I think this is something we can make a short-term priority since I agree that we could do better here.