I have a function containing:
an independent variable X,
a dependent variable Y,
and two parameters a and b to be fitted.
Using identical experimental data, both the curve_fit and leastsq functions can be used to fit the function, with similar results.
Using curve_fit I get:
[2.50110215e-04, 7.80730380e-05]
for parameters a and b.
Using leastsq I get:
[2.50110267e-04, 7.80730843e-05]
for parameters a and b.
I would like to know whether there are any differences between the two and, if so, in which situations I should use curve_fit and in which situations I should use leastsq.
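Since the original function is not shown, here is a minimal sketch of running the same fit through both curve_fit and leastsq; the quadratic model and the synthetic data are made up purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit, leastsq

# Hypothetical model -- the question's actual function is not shown.
def model(x, a, b):
    return a * x + b * x**2

rng = np.random.default_rng(0)
xdata = np.linspace(0.0, 10.0, 50)
ydata = model(xdata, 2.5e-4, 7.8e-5) + rng.normal(0.0, 1e-5, xdata.size)

# curve_fit takes the model itself plus the data and an initial guess.
popt, pcov = curve_fit(model, xdata, ydata, p0=[1e-4, 1e-4])

# leastsq instead takes a function returning the residuals.
def residuals(params, x, y):
    return y - model(x, *params)

plsq, ier = leastsq(residuals, [1e-4, 1e-4], args=(xdata, ydata))

print(popt)   # parameters from curve_fit
print(plsq)   # parameters from leastsq -- essentially identical
```

The main practical difference visible here is the interface: curve_fit wants the model function directly, while leastsq wants you to build the residual vector yourself.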
From the leastsq docs: minimize the sum of squares of a set of equations. The objective function passed to it should take at least one (possibly length-N vector) argument and return M floating-point numbers.
SciPy's optimization library provides the curve_fit() function to fit data with a given function using nonlinear least squares and extract the optimal parameters. It takes the input and output data as arguments, together with the mapping function to use; the mapping function must accept the input data plus some number of parameters.
It returns:
popt : array. Optimal values for the parameters so that the sum of the squared residuals of f(xdata, *popt) - ydata is minimized.
pcov : 2d array. The estimated covariance of popt. The diagonals provide the variances of the parameter estimates.
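For instance, the one-sigma parameter uncertainties can be read off the diagonal of pcov; the exponential model and the synthetic data below are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative model only -- the question's actual function is not shown.
def model(x, a, b):
    return a * np.exp(b * x)

rng = np.random.default_rng(1)
xdata = np.linspace(0.0, 5.0, 40)
ydata = model(xdata, 2.0, 0.5) + rng.normal(0.0, 0.05, xdata.size)

popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 0.1])

# The diagonal of pcov holds the variances, so the one-sigma
# uncertainties are the square roots of those entries.
perr = np.sqrt(np.diag(pcov))
print(popt, perr)
```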
curve_fit uses leastsq for the calculation, so they should always give the same result. The minuscule difference you see is probably the result of rounding error or convergence tolerances somewhere; calling leastsq directly should eliminate it.
From the docs of curve_fit:
The algorithm uses the Levenberg-Marquardt algorithm through leastsq. Additional keyword arguments are passed directly to that algorithm.
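Because of that pass-through, you can tighten the underlying leastsq tolerances via curve_fit keywords such as ftol, xtol, and maxfev, which typically makes the two routes agree to more digits. The model and data below are again made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative model only.
def model(x, a, b):
    return a * x + b * x**2

xdata = np.linspace(0.0, 10.0, 50)
ydata = model(xdata, 2.5e-4, 7.8e-5)   # noiseless synthetic data

# ftol/xtol/maxfev are leastsq keywords that curve_fit forwards
# when the default (unbounded) Levenberg-Marquardt method is used.
popt, pcov = curve_fit(model, xdata, ydata, p0=[1e-4, 1e-4],
                       ftol=1e-14, xtol=1e-14, maxfev=5000)
print(popt)
```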