
How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?

Actually, my strategy is the following:

  1. I evaluate my function on a given number of points
  2. I detect whether there is a change of sign
  3. I find the zero between the points that are changing sign
  4. I verify if the zero found is really a zero, or if this is an asymptote

    import numpy
    import scipy.optimize

    N = 100
    U = numpy.linspace(a, b, N) # evaluate function at N different points
    c = f(U)
    s = numpy.sign(c)
    for i in range(N - 1):
        if s[i] + s[i+1] == 0: # opposite signs
            u = scipy.optimize.brentq(f, U[i], U[i+1])
            z = f(u)
            if numpy.isnan(z) or abs(z) > 1e-3:
                continue
            print('found zero at {}'.format(u))


This algorithm seems to work, except I see two potential problems:

  1. It will not detect a zero that doesn't cross the x axis (for example, a function like f(x) = x**2). However, I don't think this can occur with the function I'm evaluating.
  2. If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
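
To illustrate problem 2, here is a hedged sketch of one mitigation: re-scan each subinterval that shows no sign change on a finer grid before giving up on it. The test function and the recursion depth below are made up for illustration; they are not part of my actual problem:

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # made-up test function: two roots only 0.02 apart (at 0.99 and 1.01)
    return (x - 1.0)**2 - 1e-4

def find_roots(f, a, b, n=100, depth=2):
    """Scan [a, b] at n points and bisect each sign change with brentq;
    re-scan every sign-stable subinterval on a finer grid so that a pair
    of nearby roots hiding between two samples is not missed."""
    x = np.linspace(a, b, n)
    s = np.sign(f(x))
    roots = []
    for i in range(n - 1):
        if s[i] * s[i + 1] < 0:   # opposite signs: at least one root here
            roots.append(brentq(f, x[i], x[i + 1]))
        elif depth > 0:           # no sign change: look closer before moving on
            roots.extend(find_roots(f, x[i], x[i + 1], n=10, depth=depth - 1))
    return roots

roots = sorted(find_roots(f, 0.0, 2.0))
# the coarse 100-point grid alone sees no sign change near x = 1,
# but the recursive re-scan recovers both roots
```

This still misses roots that touch zero without crossing (problem 1), and the cost grows with the recursion depth.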

Do you have a better strategy (still efficient) to find all the zeros of a function?


I don't think it's important for the question, but for those who are curious, I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):

def f(u):
    w = numpy.sqrt(V**2 - u**2)

    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jn(ell-1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell-1, w)

    return jl / (u*jl1) + kl / (w*kl1)
Charles Brunet asked Feb 14 '13


2 Answers

Why are you limited to numpy? Scipy has a package that does exactly what you want:

http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html

One lesson I've learned: numerical programming is hard, so don't do it :)


Anyway, if you're dead set on building the algorithm yourself, the scipy doc page I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method I've used before is to discretize the function to the degree necessary for your problem. That is, tune dx so that it is much smaller than the characteristic size in your problem. This lets you look for features of the function (like changes in sign). And you can compute the slope of a line segment pretty easily (you've probably been able to since kindergarten), so your discretized function has a well-defined first derivative. Because you've tuned dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.

If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
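
As a concrete sketch of this (the interval and sample count are my own choices): sample cos(\pi x), whose characteristic size is 1/\pi, with dx much smaller than that, and every sign change shows up:

```python
import numpy as np

# cos(pi*x) has characteristic size 1/pi ~ 0.318;
# choose dx well below that before scanning for sign changes
a, b = 0.0, 10.0
x = np.linspace(a, b, 1000)    # dx ~ 0.01 << 1/pi
y = np.cos(np.pi * x)

# indices where consecutive samples have opposite signs;
# cos(pi*x) crosses zero at 0.5, 1.5, ..., 9.5, i.e. 10 times on [0, 10]
crossings = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]
```

Each crossing index brackets exactly one root, which you can then polish with a bracketing root finder.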

BenDundee answered Sep 28 '22


The main problem I see with this is whether you can actually find all the roots --- as has already been mentioned in the comments, this is not always possible. If you are sure your function is not completely pathological (sin(1/x) was already mentioned), the next question is your tolerance for missing a root or several of them --- put differently, how far you are prepared to go to make sure you did not miss any. To the best of my knowledge, there is no general method to isolate all the roots for you, so you'll have to do it yourself. What you show is already a reasonable first step. A couple of comments:

  • Brent's method is indeed a good choice here.
  • First of all, deal with the divergences. Since your function has Bessel functions in the denominators, you can first solve for their roots --- better yet, look them up in e.g. Abramowitz and Stegun (Mathworld link). This will be better than the ad hoc grid you're using.
  • Once you've found two roots or divergences, x_1 and x_2, run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one).
  • If you cannot enumerate all the divergences, you might want to be a little more careful in verifying that a candidate is indeed a divergence: given x, don't just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
  • If you want to make sure you don't miss roots which merely touch zero without crossing, look for the extrema of the function, and check the value of f(x_e) at each extremum x_e.
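
A sketch of the first three bullets, with the hedge that tan(x) - x = 0 here stands in for your characteristic equation: its poles at (k+1/2)*pi play the role of the Bessel-zero divergences. For your real problem the interval endpoints would come from scipy.special.jn_zeros(ell, n), which returns the first n positive zeros of J_ell:

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # stand-in for the characteristic equation, with known divergences
    return np.tan(x) - x

eps = 1e-8
# known divergences; for the real problem, use scipy.special.jn_zeros
poles = [(k + 0.5) * np.pi for k in range(4)]

roots = []
for lo, hi in zip(poles[:-1], poles[1:]):
    a, b = lo + eps, hi - eps            # step off the poles by epsilon
    fa, fb = f(a), f(b)
    if np.isfinite(fa) and np.isfinite(fb) and fa * fb < 0:
        u = brentq(f, a, b)              # guaranteed to converge on a bracket
        if abs(f(u)) < 1e-6:             # reject a divergence posing as a root
            roots.append(u)
```

Each interval between consecutive poles is searched independently; the first nonzero solutions of tan(x) = x are approximately 4.4934, 7.7253 and 10.9041, one per interval.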
ev-br answered Sep 28 '22