This question is related to the "Numerical Recipes in C++" book, so it is aimed at people who know a little about it as well as about multidimensional optimization.
I am writing a program that needs to search for a multidimensional root, and to solve it I am using the multidimensional Newton root-finding method, namely the "newt" procedure.

For those interested in the details, I am trying to fit a deformable 3D model to a stereoscopic view of an object, based on a few feature points (feature points which are seen by two cameras).

For this, I am using the newt procedure with 11 input parameters and 14 output parameters.

My problem is that I have more output parameters (14) than input parameters (11): whenever I call "newt", the algorithm always converges, but it finds a solution that minimizes the first 11 output parameters almost perfectly while leaving large errors on the 3 remaining ones.
However, I would like the error to be distributed uniformly among the output parameters.

I have already tried the approaches described below:
Does anyone know of a more generic approach, in which the root-finding algorithm would favor an error that is distributed uniformly among the output parameters, instead of favoring the first ones?
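To give an idea of the setup, it boils down to something like the sketch below (written with plain std::vector instead of the Numerical Recipes vector types, and with a dummy prediction standing in for the real deform-and-project computation, so the names here are only illustrative):

```cpp
#include <cassert>
#include <vector>

// Rough stand-in for the fit: 11 model parameters in, 14 residuals out.
struct StereoFitResiduals {
    static const int NPARAMS = 11;  // deformable-model parameters (the unknowns)
    static const int NOUTPUTS = 14; // measured feature coordinates (the outputs)

    std::vector<double> measured;   // the 14 values observed by the two cameras

    explicit StereoFitResiduals(const std::vector<double>& m) : measured(m) {
        assert(measured.size() == NOUTPUTS);
    }

    // Placeholder for "deform the model and project it into each camera";
    // the real computation would go here.
    double predict(const std::vector<double>& p, int i) const {
        double s = 0.0;
        for (int j = 0; j < NPARAMS; ++j)
            s += 0.01 * (i + 1) * (j + 1) * p[j];
        return s;
    }

    // f(x): the 14 residuals (predicted minus measured) for 11 parameters.
    std::vector<double> operator()(const std::vector<double>& p) const {
        assert(p.size() == NPARAMS);
        std::vector<double> f(NOUTPUTS);
        for (int i = 0; i < NOUTPUTS; ++i)
            f[i] = predict(p, i) - measured[i];
        return f;
    }
};
```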
You are trying to fit the model by solving f(x) = 0, where x is an m-dimensional vector of parameters and f maps it to an n-dimensional vector of residuals. With m < n (in your case 11 < 14) the system is overdetermined: there are more equations than unknowns, so in general no x satisfies all of them exactly, and a root finder ends up satisfying some equations at the expense of the others, which is exactly what you are seeing.

For such systems, the generic approach is to minimize a norm of the residual instead of looking for an exact root. You can do this by minimizing f(x)^T W f(x) with respect to x, where W is a symmetric positive-definite weight matrix. Without further information, you can take W to be the n x n identity matrix; then every component of f contributes equally to the objective, and the minimizer spreads the error over all 14 outputs instead of concentrating it in the last 3. (If you also had a reason to keep x itself small, you could add a regularization term x^T A x with an m x m matrix A, but that is a separate concern.)
For more details on doing this with Newton's method, see e.g. this paper.
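To make this concrete, here is a minimal Gauss-Newton sketch for minimizing f(x)^T W f(x) with a diagonal W, a forward-difference Jacobian and a small dense solver. It is only an illustration of the idea, not the Numerical Recipes "newt" routine, and every name in it is made up for this example:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

typedef std::vector<double> Vec;
typedef std::vector<Vec> Mat;

// Solve the small linear system A*x = b by Gaussian elimination with partial
// pivoting (adequate for an 11x11 system; assumes A is nonsingular).
Vec solve(Mat A, Vec b) {
    int n = b.size();
    for (int k = 0; k < n; ++k) {
        int piv = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[piv][k])) piv = i;
        std::swap(A[k], A[piv]);
        std::swap(b[k], b[piv]);
        for (int i = k + 1; i < n; ++i) {
            double m = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    Vec x(n);
    for (int i = n - 1; i >= 0; --i) {
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

// Gauss-Newton for minimizing f(x)^T W f(x): at each step solve
// (J^T W J) dx = -J^T W f and update x. F maps an m-vector of parameters to
// an n-vector of residuals (m = 11, n = 14 here); w holds the diagonal of W.
template <class Func>
Vec gaussNewton(Func& F, Vec x, const Vec& w, int maxIter = 50, double tol = 1e-10) {
    const int m = x.size();
    for (int iter = 0; iter < maxIter; ++iter) {
        Vec f = F(x);
        const int n = f.size();
        // Forward-difference Jacobian J[i][j] = d f_i / d x_j.
        Mat J(n, Vec(m));
        const double h = 1e-6;
        for (int j = 0; j < m; ++j) {
            Vec xh = x;
            xh[j] += h;
            Vec fh = F(xh);
            for (int i = 0; i < n; ++i) J[i][j] = (fh[i] - f[i]) / h;
        }
        // Normal equations: A = J^T W J, b = -J^T W f.
        Mat A(m, Vec(m, 0.0));
        Vec b(m, 0.0);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j) {
                b[j] -= J[i][j] * w[i] * f[i];
                for (int k = 0; k < m; ++k) A[j][k] += J[i][j] * w[i] * J[i][k];
            }
        Vec dx = solve(A, b);
        double step = 0.0;
        for (int j = 0; j < m; ++j) { x[j] += dx[j]; step += dx[j] * dx[j]; }
        if (step < tol * tol) break;
    }
    return x;
}
```

With a weight vector of all ones (Vec w(14, 1.0)), every output contributes equally to the objective, which gives the uniformly divided error you are after; unequal weights let you deliberately favor some measurements over others. A residual functor like the StereoFitResiduals sketch above can be passed directly as F.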