 

In SciPy, how and why does curve_fit calculate the covariance of the parameter estimates?

I have been using scipy.optimize.leastsq to fit some data. I would like to get confidence intervals on these estimates, so I look into the cov_x output, but the documentation is very unclear as to what this is and how to get the covariance matrix for my parameters from it.

First of all, it says that it is a Jacobian, but in the notes it also says that "cov_x is a Jacobian approximation to the Hessian", so it is not actually a Jacobian but a Hessian constructed using some approximation from the Jacobian. Which of these statements is correct?

Secondly, this sentence is confusing to me:

This matrix must be multiplied by the residual variance to get the covariance of the parameter estimates – see curve_fit.

So I go and look at the source code for curve_fit, where they do:

    s_sq = (func(popt, *args)**2).sum() / (len(ydata) - len(p0))
    pcov = pcov * s_sq

which corresponds to multiplying cov_x by s_sq, but I cannot find this equation in any reference. Can someone explain why this equation is correct? My intuition tells me that it should be the other way around, since cov_x is supposed to be a derivative (Jacobian or Hessian), so I was thinking: cov_x * covariance(parameters) = sum of squared residuals, where covariance(parameters) is the thing I want.

How do I connect what curve_fit is doing with what I see at, e.g., Wikipedia: http://en.wikipedia.org/wiki/Propagation_of_uncertainty#Non-linear_combinations

asked Feb 13 '13 by HansHarhoff



1 Answer

OK, I think I found the answer. First the solution: cov_x * s_sq is simply the covariance of the parameters, which is what you want. Taking the square root of the diagonal elements gives you the standard deviations (but be careful about the off-diagonal covariances!).

Residual variance = reduced chi-square = s_sq = sum[(f(x_i) - y_i)^2] / (N - n), where N is the number of data points and n is the number of fitting parameters.
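To make that concrete, here is a minimal sketch of doing this by hand with leastsq (the exponential model, the data, and all names here are made up purely for illustration):

    import numpy as np
    from scipy.optimize import leastsq

    # Made-up exponential model and noisy data, for illustration only.
    def residuals(p, x, y):
        return p[0] * np.exp(-p[1] * x) - y

    rng = np.random.default_rng(0)
    x = np.linspace(0, 5, 50)
    y = 2.5 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

    p0 = [1.0, 1.0]
    popt, cov_x, infodict, mesg, ier = leastsq(residuals, p0, args=(x, y), full_output=True)

    # Residual variance s_sq = sum(r^2) / (N - n); infodict['fvec'] holds the residuals.
    s_sq = (infodict['fvec'] ** 2).sum() / (len(x) - len(p0))
    pcov = cov_x * s_sq                # covariance of the parameter estimates
    perr = np.sqrt(np.diag(pcov))      # 1-sigma standard deviations (ignores off-diagonals)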

The reason for my confusion is that cov_x as given by leastsq is not actually what is called cov(x) in other places; rather, it is the reduced cov(x) or fractional cov(x). The reason it does not show up in any of the other references is that it is a simple rescaling which is useful in numerical computations but is not relevant for a textbook.

About Hessian versus Jacobian: the documentation is poorly worded. It is the Hessian that is calculated in both cases, as it must be, since the gradient is zero at a minimum. What they mean is that they are using an approximation built from the Jacobian to find the Hessian.
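In symbols (my own sketch of the standard Gauss-Newton argument, not taken from the SciPy docs): with the sum of squares S(\beta) = \sum_i r_i(\beta)^2 and residuals r_i = f(x_i; \beta) - y_i,

    \nabla S = 2 J^T r = 0 \quad \text{at the minimum},
    \nabla^2 S = 2 \left( J^T J + \sum_i r_i \nabla^2 r_i \right) \approx 2 J^T J,

where J is the Jacobian of the residuals. So the cov_x returned by leastsq, (J^T J)^{-1}, is an inverse-Hessian estimate assembled from the Jacobian alone.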

A further note. It seems that the curve_fit result does not actually account for the absolute size of the errors, but only takes into account the relative sizes of the sigmas provided. This means that the pcov returned doesn't change even if the error bars change by a factor of a million. This is of course not right, but seems to be standard practice; e.g. MATLAB does the same thing in its Curve Fitting Toolbox. The correct procedure is described here: https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#Parameter_errors_and_correlation
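A minimal sketch of this behaviour (note: later SciPy releases added an absolute_sigma flag to curve_fit to address exactly this; on older versions that keyword is unavailable, and the model and data below are made up for illustration):

    import numpy as np
    from scipy.optimize import curve_fit

    # Made-up model and data, for illustration only.
    def f(x, a, b):
        return a * np.exp(-b * x)

    rng = np.random.default_rng(0)
    x = np.linspace(0, 5, 50)
    y = f(x, 2.5, 1.3) + 0.05 * rng.standard_normal(x.size)

    sigma = 0.05 * np.ones_like(x)
    _, pcov1 = curve_fit(f, x, y, sigma=sigma)        # default: only relative weighting
    _, pcov2 = curve_fit(f, x, y, sigma=1e6 * sigma)  # pcov2 == pcov1 (up to round-off)
    _, pcov3 = curve_fit(f, x, y, sigma=sigma, absolute_sigma=True)  # sigma taken at face value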

It seems fairly straightforward to do this once the optimum has been found, at least for linear least squares.
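For example, a minimal linear least squares sketch of the formula on that page, cov(beta) = s_sq * (X^T X)^{-1}, using a made-up straight-line fit:

    import numpy as np

    # Made-up data for a straight-line fit y = b0 + b1*x.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 30)
    y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(x.size)

    X = np.column_stack([np.ones_like(x), x])        # design matrix
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

    r = y - X @ beta                                 # residuals at the optimum
    s_sq = (r ** 2).sum() / (len(y) - X.shape[1])    # residual variance
    pcov = s_sq * np.linalg.inv(X.T @ X)             # parameter covariance matrix
    perr = np.sqrt(np.diag(pcov))                    # 1-sigma parameter errors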

answered Oct 08 '22 by HansHarhoff