 

Numerical Recipes / Multidimensional root search (using newt): How to minimize the maximum error

This question is related to the "Numerical Recipes in C++" book, so it is mainly aimed at people who know a little about it, as well as about multidimensional optimization.

I am writing a program that needs to find a multidimensional root, and to solve it I am using the multidimensional Newton root-finding method, namely the "newt" procedure.

For those interested in the details, I am trying to fit a deformable 3D model to a stereoscopic view of an object, based on a few feature points (points that are seen by both cameras).

For this, I am using the newt procedure with the following:

  • 11 input parameters: my deformable model is described by 11 parameters (5 geometric shape parameters plus 6 degrees of freedom for the 3D object location).
  • 14 output parameters for which I need to find the root: given a set of input parameters, I can compute, for each feature point identified by the cameras, the distance between its observed location and its theoretical location. I have 7 such points, which gives me 14 output values (7 distances times 2, since I compute the distances in both camera views). A minimal sketch of such a residual function is shown right after this list.
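For concreteness, the residual function might look like the following minimal sketch. All names here (Point2D, observed, projectFeature, residuals) are hypothetical placeholders rather than the actual code, and the projection body is a dummy; the only point is the shape of the mapping from 11 parameters to 14 distances.

    #include <cmath>
    #include <vector>

    // Hypothetical placeholders: 7 feature points observed by 2 cameras.
    struct Point2D { double u, v; };

    // Observed image positions, indexed as observed[camera][point]
    // (zero-initialized dummies here; filled from the real detections).
    static Point2D observed[2][7];

    // Hypothetical projection of feature point `p` through camera `cam`,
    // given the 11 model parameters (5 shape + 6 pose). A real version
    // would apply the deformable model and the camera projection.
    Point2D projectFeature(const std::vector<double>& params, int cam, int p)
    {
        // Dummy body so the sketch compiles; replace with the real model.
        return Point2D{ params[0] * (p + 1) + cam, params[1] * (p + 1) - cam };
    }

    // The vector function handed to the root finder / minimizer:
    // 11 inputs -> 14 residuals (7 points x 2 cameras).
    std::vector<double> residuals(const std::vector<double>& params)
    {
        std::vector<double> f;
        f.reserve(14);
        for (int cam = 0; cam < 2; ++cam) {
            for (int p = 0; p < 7; ++p) {
                const Point2D pred = projectFeature(params, cam, p);
                const double du = pred.u - observed[cam][p].u;
                const double dv = pred.v - observed[cam][p].v;
                f.push_back(std::sqrt(du * du + dv * dv)); // distance in this view
            }
        }
        return f;
    }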

My problem is that I have more output parameters (14) than input parameters (11): whenever I call "newt", the algorithm always converges, but it finds a solution that almost perfectly zeroes the first 11 output parameters while leaving large errors on the 3 remaining ones.

However, I would like the error to be distributed uniformly among the output parameters.

I have already tried the approaches described below:

  1. Combine the 14 output parameters into 11 (for example, by taking the average of some distances instead of using both). However, I am not 100% satisfied with this approach; a sketch of such a combination is shown after this list.
  2. Mix several solutions, with the following principle:
    • Call mnewt and memorize the root it finds
    • Change the order of the 14 output parameters
    • Call mnewt again and memorize the root it finds
    • Compute a solution that is the average of the two roots
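For reference, the combination in approach 1 could look like the sketch below. Which distances get averaged (here: the last 6, pairwise) is an arbitrary choice made purely for illustration, and combineResiduals is a hypothetical name.

    #include <vector>

    // Hypothetical reduction of the 14 distances to 11 values: the first 8
    // are kept as-is, and the remaining 6 are averaged pairwise (3 values).
    // Which components are merged is an illustration, not a recommendation.
    std::vector<double> combineResiduals(const std::vector<double>& f14)
    {
        std::vector<double> f11;
        f11.reserve(11);
        for (int i = 0; i < 8; ++i)
            f11.push_back(f14[i]);
        for (int i = 8; i < 14; i += 2)
            f11.push_back(0.5 * (f14[i] + f14[i + 1]));
        return f11;
    }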

Does anyone know of a more generic approach, in which the root-finding algorithm would spread the error uniformly among the output parameters instead of favoring the first ones?

Asked by Pascal T. on Nov 05 '22


1 Answer

You are trying to solve f(x) = 0, where x is an m-dimensional vector of parameters and f maps it to an n-dimensional vector of residuals. With m < n (in your case 11 < 14) the system is overdetermined: there are more equations than unknowns, so in general no x makes all the residuals exactly zero.

For such systems, the generic approach is to minimize a norm of the residual rather than to look for an exact root: minimize f(x)^T W f(x) over x, where W is an n×n symmetric positive-definite weighting matrix. Without further information you can take W to be the identity, which weights all 14 components equally; this is the standard nonlinear least-squares formulation, and its minimizer spreads the error over all components instead of driving the first m of them to zero and dumping the misfit on the rest. If you additionally want to keep the parameters small (or close to a reference configuration x_0), you can add a regularization term (x - x_0)^T A (x - x_0) with an m×m positive-definite matrix A.
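A minimal, self-contained sketch of this idea (with W taken as the identity, i.e. plain Gauss–Newton on the unweighted least-squares objective) is shown below on a hypothetical toy problem with 2 unknowns and 3 residuals; the residual, jacobian and solveSquare names and the toy numbers are invented for illustration. In your case residual would return the 14 distances and x would hold the 11 model parameters; in practice a Levenberg–Marquardt implementation is usually a more robust choice than this bare loop.

    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;

    // Toy overdetermined residual: 2 unknowns, 3 equations, no exact root.
    Vec residual(const Vec& x)
    {
        return { x[0] + x[1] - 2.0,
                 x[0] - x[1] - 0.5,
                 2.0 * x[0] + x[1] - 2.4 };
    }

    // Forward-difference Jacobian J[i][j] = d f_i / d x_j.
    Mat jacobian(const Vec& x, const Vec& f0)
    {
        const double h = 1e-7;
        Mat J(f0.size(), Vec(x.size(), 0.0));
        for (size_t j = 0; j < x.size(); ++j) {
            Vec xh = x;
            xh[j] += h;
            const Vec fh = residual(xh);
            for (size_t i = 0; i < f0.size(); ++i)
                J[i][j] = (fh[i] - f0[i]) / h;
        }
        return J;
    }

    // Solve the small square system A * dx = b by Gaussian elimination with
    // partial pivoting (adequate for the tiny normal equations used here).
    Vec solveSquare(Mat A, Vec b)
    {
        const size_t n = b.size();
        for (size_t k = 0; k < n; ++k) {
            size_t piv = k;
            for (size_t i = k + 1; i < n; ++i)
                if (std::fabs(A[i][k]) > std::fabs(A[piv][k])) piv = i;
            std::swap(A[k], A[piv]);
            std::swap(b[k], b[piv]);
            for (size_t i = k + 1; i < n; ++i) {
                const double m = A[i][k] / A[k][k];
                for (size_t j = k; j < n; ++j) A[i][j] -= m * A[k][j];
                b[i] -= m * b[k];
            }
        }
        Vec x(n);
        for (size_t i = n; i-- > 0; ) {
            double s = b[i];
            for (size_t j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
            x[i] = s / A[i][i];
        }
        return x;
    }

    int main()
    {
        Vec x = { 0.0, 0.0 };                        // initial guess for the 2 unknowns
        for (int iter = 0; iter < 20; ++iter) {
            const Vec f = residual(x);
            const Mat J = jacobian(x, f);
            const size_t m = x.size(), n = f.size();

            // Form the normal equations (J^T J) dx = -J^T f.
            Mat JtJ(m, Vec(m, 0.0));
            Vec Jtf(m, 0.0);
            for (size_t i = 0; i < n; ++i)
                for (size_t a = 0; a < m; ++a) {
                    Jtf[a] -= J[i][a] * f[i];
                    for (size_t b = 0; b < m; ++b)
                        JtJ[a][b] += J[i][a] * J[i][b];
                }

            const Vec dx = solveSquare(JtJ, Jtf);
            double step = 0.0;
            for (size_t a = 0; a < m; ++a) { x[a] += dx[a]; step += std::fabs(dx[a]); }
            if (step < 1e-12) break;                 // converged
        }
        std::printf("least-squares solution: x = (%g, %g)\n", x[0], x[1]);
        return 0;
    }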

For more details on doing this with Newton's method, see e.g. this paper.

Answered by TemplateRex on Nov 15 '22