
How do I approximate the Jacobian and Hessian of a function numerically?

I have a function in Python:

def f(x):
    return x[0]**3 + x[1]**2 + 7 
    # Actually more than this.
    # No analytical expression

It's a scalar valued function of a vector.

How can I approximate the Jacobian and Hessian of this function in numpy or scipy numerically?

nickponline asked Dec 13 '12


2 Answers

(Updated in late 2017 because there have been a lot of updates in this space.)

Your best bet is probably automatic differentiation. There are now many packages for this, because it's the standard approach in deep learning:

  • Autograd works transparently with most numpy code. It's pure-Python, requires almost no code changes for typical functions, and is reasonably fast (see the sketch just after this list).
  • There are many deep-learning-oriented libraries that can do this. Some of the most popular are TensorFlow, PyTorch, Theano, Chainer, and MXNet. Each will require you to rewrite your function in their kind-of-like-numpy-but-needlessly-different API, and in return will give you GPU support and a bunch of deep learning-oriented features that you may or may not care about.
  • FuncDesigner is an older package I haven't used; its website is currently down.
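
For example, here is a rough sketch with Autograd, reusing the toy f from the question (the real f just needs to be written with autograd.numpy operations):

import autograd.numpy as np   # drop-in wrapper around numpy
from autograd import grad, hessian

def f(x):
    return x[0]**3 + x[1]**2 + 7

x = np.array([1.0, 2.0])
print(grad(f)(x))      # gradient of a scalar f (its Jacobian as a 1-D array): [3. 4.]
print(hessian(f)(x))   # matrix of second partials: [[6. 0.], [0. 2.]]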

Another option is to approximate it with finite differences, basically just evaluating (f(x + eps) - f(x - eps)) / (2 * eps) (but obviously with more effort put into it than that). This will probably be slower and less accurate than the other approaches, especially in moderately high dimensions, but is fully general and requires no code changes. numdifftools seems to be the standard Python package for this.
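
To see the idea without any extra dependency, a hand-rolled central-difference sketch looks roughly like this (numdifftools does the same thing, but with careful step-size selection and error control):

import numpy as np

def approx_gradient(f, x, eps=1e-6):
    # Central differences in each coordinate: (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps)
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def approx_hessian(f, x, eps=1e-4):
    # Each second partial from four function evaluations per (i, j) pair.
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps**2)
    return H

def f(x):
    return x[0]**3 + x[1]**2 + 7

x = np.array([1.0, 2.0])
print(approx_gradient(f, x))   # roughly [3. 4.]
print(approx_hessian(f, x))    # roughly [[6. 0.], [0. 2.]]

With numdifftools the equivalent calls are numdifftools.Gradient(f)(x) and numdifftools.Hessian(f)(x).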

You could also attempt to find fully symbolic derivatives with SymPy, but this will be a relatively manual process.
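
For completeness, the symbolic route looks like this; it only works if f can be written down in closed form, which the question says it can't, so treat it purely as an illustration:

import sympy as sp

x0, x1 = sp.symbols('x0 x1')
f = x0**3 + x1**2 + 7

gradient = [sp.diff(f, v) for v in (x0, x1)]   # [3*x0**2, 2*x1]
H = sp.hessian(f, (x0, x1))                    # Matrix([[6*x0, 0], [0, 2]])

You can then turn the symbolic results into fast numeric functions with sp.lambdify.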

Danica answered Oct 05 '22


Restricted to just SciPy, the most convenient way I found was scipy.misc.derivative, within the appropriate loops, with lambdas to curry the function of interest.
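
As a rough sketch of what that looks like (this assumes an older SciPy; scipy.misc.derivative was deprecated and has been removed from recent releases): fix every coordinate but one with a lambda, then differentiate the resulting one-variable function.

import numpy as np
from scipy.misc import derivative

def f(x):
    return x[0]**3 + x[1]**2 + 7

def partial(f, x, i, dx=1e-6):
    # d f / d x_i at x: curry f down to a function of coordinate i only,
    # then hand that one-variable function to scipy.misc.derivative.
    x = np.asarray(x, dtype=float)
    f_i = lambda xi: f(np.concatenate((x[:i], [xi], x[i+1:])))
    return derivative(f_i, x[i], dx=dx)

x = np.array([1.0, 2.0])
print(np.array([partial(f, x, i) for i in range(x.size)]))   # roughly [3. 4.]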

Evgeni Sergeev answered Oct 05 '22