I'm using SymPy version 0.7.3 and ran into problems when using the dsolve function. It seems that dsolve has difficulties when the input equation has too many parameters.
I've tried solving the following equation:
from sympy import *
p = Function('p')
t, u1, u2, u3, u4, u5 = symbols('t u1 u2 u3 u4 u5')
eq = Eq(Derivative(p(t),t), -(u3 + u4)*p(t) + exp(-t*(u1 + u2)))
eq
Out: Derivative(p(t), t) == (-u3 - u4)*p(t) + exp(-t*(u1 + u2))
%time dsolve(eq)
And got:
CPU times: user 213.11 s, sys: 0.00 s, total: 213.11 s
Wall time: 213.12 s
p(t) == (C1 + Piecewise((t*u1/(u1 + u2) + t*u2/(u1 + u2), u3 == u1 + u2 - u4), (-exp(t*u3)*exp(t*u4)/(u1*exp(t*u1)*exp(t*u2) + u2*exp(t*u1)*exp(t*u2) - u3*exp(t*u1)*exp(t*u2) - u4*exp(t*u1)*exp(t*u2)), True)))*exp(-t*(u3 + u4))
(Took 213.12 seconds!)
Then I replaced u1 + u2 with u5:
eq = Eq(Derivative(p(t),t), -(u3 + u4)*p(t) + exp(-t*(u1 + u2))).subs(u1+u2, u5)
eq
Out: Derivative(p(t), t) == (-u3 - u4)*p(t) + exp(-t*u5)
%time dsolve(eq)
and got:
CPU times: user 1.62 s, sys: 0.00 s, total: 1.62 s
Wall time: 1.62 s
p(t) == (C1 + Piecewise((t, u3 == -u4 + u5), (exp(t*u3)*exp(t*u4)/(u3*exp(t*u5) + u4*exp(t*u5) - u5*exp(t*u5)), True)))*exp(-t*(u3 + u4))
(Only 1.62 seconds!)
I've tried using different hints, but they didn't help.
I've also noticed that with much more complex equations dsolve crashes, but when some of the constant parameters are replaced, it works fast.
Do you know the reason for this phenomenon? Is there any way to solve this problem automatically?
I noted the same problem with Mathematica, where it is very advantageous to limit the number of symbols in an expression. I think the reason for this is that most symbolic calculation tools work by first mechanically applying a general recipe to solve the problem, and then simplify the result as much as possible.
In the general solution, it may be very difficult for the simplifier to recover that certain symbols occur only in a given combination, and can thus be treated as a single symbol. So the simplifier unnecessarily has to deal with a larger search space. In addition, it has to make sure that it treats all the boundary cases correctly (u1 might be < 0, > 0, = 0, complex, ...?).
I found it very advantageous to do as many simplifications by hand as possible before giving the problem to a symbolic solver. Grouping variables together by hand (as in your example) is one useful technique. Another technique is to work out normalizations, or to set one parameter to 1. For example, when dealing with a polynomial a x^2 + b x + c, most of the time the problem x^2 + B x + C is equivalent for our purposes. (Since we are actually sure that a != 0, but have forgotten to tell the solver, right?) But for the solver it makes a big difference that the number of symbolic variables is reduced by one.
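A minimal sketch of the normalization idea (the names B and C are my own, chosen just for this illustration): solve the monic polynomial x**2 + B*x + C instead of a*x**2 + b*x + c, which is valid whenever a != 0 with B = b/a, C = c/a.

```python
from sympy import symbols, solve, Rational

x, B, C = symbols('x B C')

# The solver only has to juggle two symbolic parameters instead of three.
roots = solve(x**2 + B*x + C, x)

# Recover the roots of a concrete polynomial 2*x**2 + 3*x + 1
# by substituting B = b/a = 3/2 and C = c/a = 1/2:
concrete = [r.subs({B: Rational(3, 2), C: Rational(1, 2)}) for r in roots]
print(concrete)  # the roots -1 and -1/2 of (2*x + 1)*(x + 1)
```

The same substitution trick recovers the general solution in terms of a, b, c whenever it is needed.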
At some point, these symbolic solvers will certainly become smart enough to group variables together before attempting to solve the problem. However, at present, it seems that human intervention is still required.
On the other hand, it is hard to imagine that a symbolic solver will automatically recognize more complicated transformations that simplify the problem, such as switching from Cartesian to polar coordinates, or doing changes of variables such as l = x + iy, r = x - iy, which are not at all obvious but known to be advantageous for certain problems.
Update
It appears the best we can do for this problem is to set a = u3 + u4 and a + b = u1 + u2. Apart from reducing the number of parameters from 4 to 2, the special case now appears when b == 0, so we can easily instruct the solver to ignore it:
from sympy import *
p = Function('p')
t, a = symbols('t a')
b = symbols('b', nonzero=True)
eq = Eq(Derivative(p(t),t), -a*p(t) + exp(-(a + b)*t))
dsolve(eq)
# -> p(t) == (C1 - exp(-b*t)/b)*exp(-a*t)
# (0.75 s)
So helping the solver by avoiding the special case cut the solution time in half again (timings on my system were similar to yours). In case the special case b == 0 is actually relevant, one may easily recover it from the Taylor series expansion exp(-b*t) ~ 1 - b*t.
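As a sketch of how the expansion recovers the b == 0 branch (this is my own check, not part of the original answer):

```python
from sympy import symbols, exp

t, b = symbols('t b')

# Expand exp(-b*t) around b = 0: the solver's special case b == 0
# corresponds to the leading terms of this series.
expansion = exp(-b*t).series(b, 0, 2).removeO()   # equals 1 - b*t

# The particular part of the general solution above is -exp(-b*t)/b.
# The 1/b piece is constant in t and can be absorbed into C1; what
# remains is ~ t, exactly the Piecewise branch dsolve produced for
# the degenerate case.
particular = ((expansion - 1)/(-b)).cancel()
print(particular)  # t
```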
In general, specifying that variables are real, nonzero, strictly greater than zero, etc. is also very useful to avoid getting the solver hung up on special cases. Sometimes it is actually better to solve a problem separately for x < 0, x > 0 and x == 0 (avoiding the notorious sqrt(x^2) expressions that the solver cannot simplify further without knowledge of the sign of x).
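A minimal illustration of the sqrt(x^2) point:

```python
from sympy import symbols, sqrt, simplify, Abs

# With only real=True, the sign of x is unknown, so the best
# simplification of sqrt(x**2) is Abs(x):
x_real = symbols('x', real=True)
print(simplify(sqrt(x_real**2)))  # Abs(x)

# With positive=True the ambiguity disappears entirely:
x_pos = symbols('x', positive=True)
print(sqrt(x_pos**2))  # x
```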
The problem is a little clearer if you use the "1st_linear_Integral" hint, which returns what would be done as unevaluated Integrals (I use 1st_linear because that is the first method returned by classify_ode(eq), meaning it is the one used by dsolve by default):
In [61]: dsolve(Eq(Derivative(p(t),t), -(u3 + u4)*p(t) + exp(-t*(u1 + u2))), hint='1st_linear_Integral')
Out[61]:
⎛ ⌠ ⎞
⎜ ⎮ ⌠ ⎟ ⌠
⎜ ⎮ ⎮ (u₃ + u₄) dt ⎟ -⎮ (u₃ + u₄) dt
⎜ ⎮ -t⋅u₁ -t⋅u₂ ⌡ ⎟ ⌡
p(t) = ⎜C₁ + ⎮ ℯ ⋅ℯ ⋅ℯ dt⎟⋅ℯ
⎝ ⌡ ⎠
In [62]: dsolve(Eq(Derivative(p(t),t), -(u3 + u4)*p(t) + exp(-t*(u1 + u2))).subs(u1+u2, u5), hint='1st_linear_Integral')
Out[62]:
⎛ ⌠ ⎞
⎜ ⎮ ⌠ ⎟ ⌠
⎜ ⎮ ⎮ (u₃ + u₄) dt ⎟ -⎮ (u₃ + u₄) dt
⎜ ⎮ -t⋅u₅ ⌡ ⎟ ⌡
p(t) = ⎜C₁ + ⎮ ℯ ⋅ℯ dt⎟⋅ℯ
⎝ ⌡ ⎠
The integration algorithm has an issue with exp(a*x)*exp(b*x), whereas it has no problem with exp((a + b)*x). Basically there is a faster algorithm with a more limited scope that is called first, and a slower algorithm with a larger scope that is called if the fast one fails. The fast one can handle exp((a + b)*x) but currently not exp(a*x)*exp(b*x).
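The difference can be seen on the bare integrals as well; here is a quick sketch of my own (the exact shape of the results may vary between SymPy versions):

```python
from sympy import symbols, exp, integrate, simplify

t, a, b = symbols('t a b')

# Combined exponent: handled by the fast integrator.
combined = integrate(exp((a + b)*t), t)

# Product of exponentials: falls through to the slower machinery.
product = integrate(exp(a*t)*exp(b*t), t)

# Once concrete exponents are plugged in, both antiderivatives agree:
delta = simplify(combined.subs({a: 2, b: 3}) - product.subs({a: 2, b: 3}))
print(delta)  # 0
```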
The easy workaround in this specific case is to merge the exponentials using powsimp:
In [67]: a = dsolve(Eq(Derivative(p(t),t), -(u3 + u4)*p(t) + exp(-t*(u1 + u2))), hint='1st_linear_Integral')
In [68]: powsimp(a, deep=True)
Out[68]:
⎛ ⌠ ⎞
⎜ ⎮ ⌠ ⎟ ⌠
⎜ ⎮ -t⋅u₁ - t⋅u₂ + ⎮ (u₃ + u₄) dt ⎟ -⎮ (u₃ + u₄) dt
⎜ ⎮ ⌡ ⎟ ⌡
p(t) = ⎜C₁ + ⎮ ℯ dt⎟⋅ℯ
⎝ ⌡ ⎠
In [69]: %time powsimp(a, deep=True).doit()
CPU times: user 261 ms, sys: 2.11 ms, total: 263 ms
Wall time: 262 ms
Out[69]:
⎛ ⎛⎧ t for u₁ + u₂ - u₃ - u₄ = 0⎞⎞
⎜ ⎜⎪ ⎟⎟
⎜ ⎜⎪ -t⋅u₁ - t⋅u₂ + t⋅(u₃ + u₄) ⎟⎟ -t⋅(u₃ + u₄)
p(t) = ⎜C₁ + ⎜⎨-ℯ ⎟⎟⋅ℯ
⎜ ⎜⎪───────────────────────────── otherwise ⎟⎟
⎜ ⎜⎪ u₁ + u₂ - u₃ - u₄ ⎟⎟
⎝ ⎝⎩ ⎠⎠
In general, Stefan's suggestions may or may not hold true. In theory, a CAS shouldn't care how complicated your symbolic constants are, because they are just constants. In reality, there are issues, because it combines the constants, and then it needs to see if things cancel, and so on. Also, little differences like the above can lead to large differences in actual algorithm paths. Mathematically two expressions can be the same, but the algorithms are dependent on how they look structurally. As a general rule of thumb, simpler expressions will tend to do better.
If you find yourself needing to replace subexpressions with symbols a lot, you can use cse to help.
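For example (the expression here is made up just to show what cse returns):

```python
from sympy import symbols, exp, cse

t, u1, u2, u3, u4 = symbols('t u1 u2 u3 u4')

# A made-up expression in which u3 + u4 occurs twice:
expr = (u3 + u4)**2 - (u3 + u4)*exp(-t*(u1 + u2))

# cse pulls the repeated subexpression out into a fresh symbol,
# returning the replacement list and the rewritten expression(s):
replacements, reduced = cse(expr)
print(replacements)   # typically [(x0, u3 + u4)]
print(reduced)        # expr rewritten in terms of x0

# Substituting the replacements back reproduces the original:
restored = reduced[0].subs(replacements)
print(restored == expr)  # True
```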
Assuming your symbols to be real or positive does help a lot too in general, though it's irrelevant in this particular problem.
By the way, SymPy 0.7.3 is a little old. The newest version, 0.7.5, was just released.