Matlab lsqnonlin in Python

I have been working through the Hartley and Zisserman Multiple View Geometry text and have implemented the Gold Standard algorithm for computing the fundamental matrix. This requires solving a non-linear minimization problem with Levenberg-Marquardt.

I implemented this with scipy.optimize.least_squares, but its performance is orders of magnitude slower than equivalent Matlab code (same functionality) that uses lsqnonlin. In neither case am I supplying the Jacobian or a sparsity mask for the Jacobian.

With respect to compute times, this holds across the range of available SciPy solvers. Does an alternative exist with performance (numerical accuracy and speed) similar to Matlab's, or is moving to a wrapped, compiled solver going to be necessary?
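For reference, supplying an analytic Jacobian or a sparsity mask to least_squares would look roughly like the toy sketch below; this is a standalone curve-fit example, not the reprojection problem itself, and I have not tried either option here:

import numpy as np
from scipy import optimize

# Toy residual: fit y ~ a * exp(b * t) to noisy data.
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * t) + 0.01 * np.random.randn(t.size)

def residuals(params, t, y):
    a, b = params
    return a * np.exp(b * t) - y

def jacobian(params, t, y):
    # Analytic Jacobian of the residuals, one row per data point.
    a, b = params
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(b * t)          # d r / d a
    J[:, 1] = a * t * np.exp(b * t)  # d r / d b
    return J

# An analytic Jacobian avoids the finite-difference residual evaluations.
res_lm = optimize.least_squares(residuals, x0=[1.0, 1.0], jac=jacobian,
                                args=(t, y), method='lm')

# Alternatively, 'trf' (but not 'lm') accepts a sparsity mask for the
# finite-difference Jacobian, which helps when the Jacobian is large and sparse.
sparsity = np.ones((t.size, 2), dtype=int)  # dense here; real problems mark only nonzero entries
res_trf = optimize.least_squares(residuals, x0=[1.0, 1.0], args=(t, y),
                                 jac_sparsity=sparsity, method='trf')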

Edit, in response to the comment requesting code: I am trying to limit the total amount of code posted.

Matlab:

P2GS = lsqnonlin(@(h)ReprojErrGS(corres1,PF1,corres2,h),PF2); 

function REGS = ReprojErrGS(corres1,PF1,corres2,PF2)
   %Find estimated 3D point by Triangulation method
   XwEst = TriangulationGS(corres1,PF1,corres2,PF2);
   %Reprojection Back to the image
   x1hat = PF1*XwEst;
   x1hat = x1hat ./ repmat(x1hat(3,:),3,1);
   x2hat = PF2*XwEst;
   x2hat = x2hat ./ repmat(x2hat(3,:),3,1);
   %Find root mean squared distance error
   dist = ((corres1 - x1hat).*(corres1 - x1hat)) + ((corres2 - x2hat).*(corres2 - x2hat));
   REGS = sqrt(sum(sum(dist)) / size(corres1,2));
end

Triangulation is the standard method, iterating over all points, setting up Ax=0 and solving using SVD.

Python:

import numpy as np
from scipy import optimize

def projection_error(p1, p, pt, pt1):
    # Triangulate the correspondences (my helper, standard SVD method as above)
    xw_est = triangulate(pt, pt1, p, p1)
    # least_squares passes the parameters as a flat 1-D vector, so reshape back to 3x4
    if p1.shape != (3, 4):
        p1 = p1.reshape(3, 4)
    # Reproject back to both images and normalise the homogeneous coordinate
    xhat = p.dot(xw_est).T
    xhat /= xhat[:, -1][:, np.newaxis]
    x2hat = p1.dot(xw_est).T
    x2hat /= x2hat[:, -1][:, np.newaxis]
    # Per-point reprojection error
    dist = (pt - xhat)**2 + (pt1 - x2hat)**2
    reproj_error = np.sqrt(np.sum(dist, axis=1) / len(pt))
    return reproj_error

# Using 'trf' for performance; swap to 'lm' for Levenberg-Marquardt.
# Inputs are pandas DataFrames, hence the .values
result = optimize.least_squares(projection_error, p1.ravel(),
                                args=(p, pt.values, pt1.values), method='trf')

This should be fully vectorized. Triangulation is as above. I can add that code but would likely link a gist to keep the question size manageable.
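For illustration only, a minimal sketch of that linear triangulation step (assuming 3xN homogeneous image points and 3x4 camera matrices; this is not my exact triangulate helper):

import numpy as np

def triangulate_dlt(x1, x2, P1, P2):
    # Linear (DLT) triangulation: for each correspondence build A X = 0 and
    # take the right singular vector for the smallest singular value.
    # x1, x2: 3xN homogeneous image points; P1, P2: 3x4 camera matrices.
    n = x1.shape[1]
    Xw = np.zeros((4, n))
    for i in range(n):
        A = np.vstack([
            x1[0, i] * P1[2] - P1[0],
            x1[1, i] * P1[2] - P1[1],
            x2[0, i] * P2[2] - P2[0],
            x2[1, i] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        Xw[:, i] = Vt[-1]
    return Xw / Xw[3]   # normalise the homogeneous coordinate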

asked by Jzl5325

1 Answer

least_squares is very new. As of Fall 2015, there were no alternatives in SciPy land. Otherwise, there's e.g. Ceres.

There are surely many opportunities to speed up least_squares; pull requests are gladly accepted :-). The first thing to check, though, is that SciPy is linked against a decent LAPACK implementation.
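One quick way to see which BLAS/LAPACK libraries NumPy and SciPy were built against (a minimal check; the exact output format varies between versions):

import numpy as np
import scipy

# Print the build configuration; a reference (unoptimised) BLAS/LAPACK here
# is a common cause of slow linear algebra.
np.show_config()
scipy.show_config()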

answered by ev-br