 

Find a root of a function in a given range

I have a set of functions f_t with several roots (two actually). I want to find the "first" root, and doing this with fsolve works fine most of the time. The problem is that the two roots converge as t goes to infinity (a simple example of my functions would be f_t(x) = x^2 - 1/t). So the larger t gets, the more mistakes fsolve makes. Is there a predefined function, similar to fsolve, to which I can say it should only look in a given range (e.g. always find the root in [0, inf))?

The question is essentially the same as https://mathematica.stackexchange.com/questions/91784/how-to-find-numerically-all-roots-of-a-function-in-a-given-range?noredirect=1&lq=1, however the answers there are for Mathematica, I want them in Python.

PS: I know how I can write my own algorithm, but as these tend to be slower than builtins, I was hoping to find a builtin that does the same. In particular, I've read this post: Find root of a function in a given interval
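To illustrate the issue, here is a minimal sketch (values chosen only for demonstration): the roots of f_t are at ±1/sqrt(t), and fsolve simply returns whichever root its initial guess happens to converge to, with no way to restrict the search to [0, inf):

```python
from scipy.optimize import fsolve

def f(x, t):
    return x**2 - 1/t

t = 100  # roots are at +-1/sqrt(t) = +-0.1

# fsolve converges to whichever root the initial guess is drawn to;
# there is no way to tell it "only look in [0, inf)"
pos = fsolve(f, x0=1.0, args=(t,))[0]
neg = fsolve(f, x0=-1.0, args=(t,))[0]
print(pos, neg)  # approximately 0.1 and -0.1
```

As t grows, both roots crowd together near 0 and a fixed initial guess can land on either one.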

Jürg Merlin Spaak asked Apr 07 '17


3 Answers

It is generally accepted that for smooth, well-behaved functions, Brent's method is the fastest method guaranteed to find a root. As with the other bracketing methods, you must provide an interval [a, b] across which the function is continuous and changes sign.

The SciPy implementation, scipy.optimize.brentq, is documented in the SciPy reference. An example use case for the function you mentioned could look like this:

from scipy.optimize import brentq

def func(x, t):
    return x**2 - 1/t

t0 = 1
a = 0       # lower end of the bracket
b = 100000  # set b to some sufficiently large value

# args supplies any extra arguments for the function that
# aren't the varied parameter; note it must be a tuple
root = brentq(func, a, b, args=(t0,))
Alex answered Nov 02 '22


You can use scipy.optimize.bisect, which takes two parameters a and b that define the starting interval. There are a few limitations, though:

  • The interval needs to be finite. You cannot search in [0, inf].
  • The function must flip sign at the root (f(a) and f(b) must have opposite signs), so, for example, you cannot find the root of f(x) = abs(x) (if that even counts as a "root" in the mathematical sense). It also won't work for f(x) = x**2 - 1 on an interval [a, b] with a < -1 and b > 1, because the function has the same sign at both endpoints.
  • The method is not gradient based. This can be an advantage if the function is very jagged or expensive to evaluate but it may be slower on other functions.

An alternative is to use scipy.optimize.minimize to minimize abs(f(x)). This function can take bounds that include infinity. But minimization may end up in a non-root local minimum of the function.

MB-F answered Nov 02 '22


Classically, you could use root:

import numpy as np
from scipy.optimize import root

def func(x, t):
    return x ** 2 - 1. / t

t = 5000

res = root(func, 0.5, args=(t,)).x[0]
print(res)

That would print the positive one, in this case 0.0141421356237.
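Note that root returns whichever root the initial guess converges to, so starting from a negative guess gives the negative root instead (same func and t as above):

```python
from scipy.optimize import root

def func(x, t):
    return x ** 2 - 1. / t

t = 5000

# a negative starting point converges to the negative root
res = root(func, -0.5, args=(t,)).x[0]
print(res)  # approximately -0.0141421356237
```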

If you want to specify the range and determine all roots within this interval, you can use chebpy:

from chebpy import chebfun

x = chebfun('x', [-100000, 100000])
t = 5000
f = x ** 2 - 1. / t

rts = f.roots()
print(rts)

This would print both the positive and the negative root, in this case

[-0.01413648  0.01413648]

If you only want to look in the positive range, you can change

x = chebfun('x', [-100000, 100000])

to

x = chebfun('x', [0, 100000])

I am, however, not sure how to use infinity here, but for practical purposes you can just use a sufficiently large number, I think.

Cleb answered Nov 02 '22