I have been struggling for a few days now to figure out how to port code from MATLAB (and Python) to C++...
My problem is porting a non-linear least-squares optimization function; everything else so far works perfectly, so this is the one roadblock left.
MATLAB implementation: https://github.com/SuwoongHeo/Deformation-Transfer-Matlab
Python implementation: https://github.com/ziyeshanwai/python-deformation-transfer
The exact bit of code I'm having trouble porting:
The MATLAB code is:
[b, res, resi] = lsqnonlin(@(b) resSimXform(b, double(A), double(B)), double(b0), [], [], options);

function result = resSimXform(b, A, B)
  % b(1:4) = rotation as axis-angle [x y z theta], b(5:7) = translation, b(8) = uniform scale
  r = b(1:4);
  t = b(5:7);
  s = b(8);
  n = size(A, 2);
  R = vrrotvec2mat(r);                    % axis-angle -> 3x3 rotation matrix
  rot_A = s * R * A + repmat(t', 1, n);   % similarity transform of every column of A
  result = sum(sum((B - rot_A).^2, 2));   % scalar sum of squared residuals
end
The equivalent Python code is:
import numpy as np
from scipy.optimize import least_squares


def resSimXform(b, A, B):
    # b[0:3] = rotation axis, b[3] = angle, b[4:7] = translation, b[7] = scale
    t = b[4:7]
    R = np.zeros((3, 3))
    R = R_axis_angle(R, b[0:3], b[3])  # user-defined axis-angle -> rotation matrix
    rot_A = b[7] * R.dot(A) + t[:, np.newaxis]
    return np.sqrt(np.sum((B - rot_A) ** 2, axis=0))  # per-point residual vector


b = least_squares(fun=resSimXform, x0=b0, jac='3-point', method='lm',
                  args=(Points_A, Points_B),
                  ftol=1e-12, xtol=1e-12, gtol=1e-12, max_nfev=100000)
I have tried various optimization libraries in C++ with no luck... The way they are set up is apparently different from their Python and MATLAB counterparts.
I have tried Eigen's LevenbergMarquardt and dlib's solve_least_squares_lm, but I couldn't figure out how to configure either of them.
I have already implemented the "residual of similarity" function (resSimXform) in C++ and it works fine. I've been constantly debugging the results and checking them against the output from MATLAB and Python.
My C++ version looks something like this (I'll clean it up more once I figure out how to use it with a least-squares solver, but at the moment it returns exactly the same value as MATLAB):
#include <Eigen/Dense>

using namespace Eigen;

double residual(const MatrixXd &x, const MatrixXd &A, const MatrixXd &B) {
    // x is a 1x8 row vector: x(0,0..3) = axis-angle, x(0,4..6) = translation, x(0,7) = scale
    MatrixXd r = x.block(0, 0, 1, 4);
    MatrixXd t = x.block(0, 4, 1, 3).transpose();  // 3x1 translation column
    double s = x(0, 7);

    MatrixXd R = R_axis_angle(r);  // my axis-angle -> rotation-matrix helper

    // Apply the similarity transform to every column (point) of A
    MatrixXd rot_A = s * R * A + t.replicate(1, A.cols());

    // Scalar sum of squared residuals, identical to the MATLAB version
    return (B - rot_A).array().square().sum();
}
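(For what it's worth, Eigen also has a built-in equivalent of MATLAB's vrrotvec2mat that could replace the hand-rolled R_axis_angle helper. A minimal sketch, assuming the same [x y z theta] layout; the function name here is just illustrative:)

#include <Eigen/Geometry>

// Hypothetical drop-in for vrrotvec2mat([x y z theta]) via Eigen's AngleAxis
Eigen::Matrix3d axisAngleToMatrix(const Eigen::Vector3d &axis, double angle) {
    return Eigen::AngleAxisd(angle, axis.normalized()).toRotationMatrix();
}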
I would greatly appreciate it if anyone could tell me which C++ optimization library would work in this scenario and how to set it up, since my experience with this sort of optimizer is limited.
Thanks in advance... I have already been googling for a couple of days without finding a solution, which is why I'm here :)
Figured it out with Eigen's unsupported LevenbergMarquardt module. You build a functor, e.g. my_functor(void) : Functor<double>(8, 8) {}. If you want to solve for 8 parameters, the functor also needs (at least) 8 residual values, because the solver requires values() >= inputs(); hence the 8x8.
The operator() is basically the residual-of-similarity function.
Also, if your residual function returns only a single scalar, you need to copy it into every slot of fvec; it will not converge if you only set fvec(0) = result:
for (int i = 0; i < x.rows(); i++) {
    fvec(i) = sum;
}
Then you wrap the functor in NumericalDiff, so the Jacobian is estimated by finite differences:
Eigen::NumericalDiff<my_functor> numDiff(functor);
Eigen::LevenbergMarquardt<Eigen::NumericalDiff<my_functor>, double> lm(numDiff);
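For reference, here is a minimal self-contained sketch of the whole setup. The Functor base template is the usual boilerplate for this module; operator() inlines the residual from the question using Eigen's built-in AngleAxis, and the random A/B matrices and the initial guess are placeholders for your real data:

#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>

// Generic functor base expected by Eigen's unsupported LevenbergMarquardt
template <typename Scalar_, int NX = Eigen::Dynamic, int NY = Eigen::Dynamic>
struct Functor {
    typedef Scalar_ Scalar;
    enum { InputsAtCompileTime = NX, ValuesAtCompileTime = NY };
    typedef Eigen::Matrix<Scalar, InputsAtCompileTime, 1> InputType;
    typedef Eigen::Matrix<Scalar, ValuesAtCompileTime, 1> ValueType;
    typedef Eigen::Matrix<Scalar, ValuesAtCompileTime, InputsAtCompileTime> JacobianType;

    int m_inputs, m_values;
    Functor(int inputs, int values) : m_inputs(inputs), m_values(values) {}
    int inputs() const { return m_inputs; }
    int values() const { return m_values; }
};

struct my_functor : Functor<double> {
    Eigen::MatrixXd A, B;  // 3xN source and target point sets

    my_functor(const Eigen::MatrixXd &A_, const Eigen::MatrixXd &B_)
        : Functor<double>(8, 8), A(A_), B(B_) {}

    int operator()(const Eigen::VectorXd &x, Eigen::VectorXd &fvec) const {
        // Unpack parameters: axis-angle rotation, translation, uniform scale
        Eigen::Matrix3d R =
            Eigen::AngleAxisd(x(3), x.segment<3>(0).normalized()).toRotationMatrix();
        Eigen::Vector3d t = x.segment<3>(4);
        double s = x(7);

        Eigen::MatrixXd rot_A = s * R * A + t.replicate(1, A.cols());
        double sum = (B - rot_A).array().square().sum();

        // The solver needs values() >= inputs(), and a single nonzero slot
        // did not converge, so copy the scalar residual into all 8 slots.
        for (int i = 0; i < values(); i++) {
            fvec(i) = sum;
        }
        return 0;
    }
};

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(3, 10);  // replace with your points
    Eigen::MatrixXd B = A;                               // replace with your points

    Eigen::VectorXd x(8);  // b0: axis (3), angle, translation (3), scale
    x << 0, 0, 1, 0, 0, 0, 0, 1;

    my_functor functor(A, B);
    Eigen::NumericalDiff<my_functor> numDiff(functor);
    Eigen::LevenbergMarquardt<Eigen::NumericalDiff<my_functor>, double> lm(numDiff);
    lm.parameters.maxfev = 100000;
    lm.parameters.xtol = 1e-12;

    int status = lm.minimize(x);  // x is updated in place with the solution
    std::cout << "status: " << status << "\nsolution: " << x.transpose() << std::endl;
    return 0;
}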
Hope this helps someone... Cheers!