
Least Squares solution to simultaneous equations

I am trying to fit a transformation from one set of coordinates to another.

x' = R + Px + Qy
y' = S - Qx + Py
where P, Q, R, S are constants, with P = scale*cos(rotation) and Q = scale*sin(rotation).

There is a well-known 'by hand' formula for fitting P, Q, R, S to a set of corresponding points, but I need an error estimate on the fit - so I need a least squares solution.

I've read 'Numerical Recipes', but I'm having trouble working out how to do this for data sets with both x and y in them.

Can anyone point me to an example, tutorial, or code sample of how to do this?
I'm not too bothered about the language.
But 'just use the built-in feature of Matlab/Lapack/numpy/R' is probably not helpful!

edit: I have a large set of old(x,y), new(x,y) point pairs to fit to. The problem is overdetermined (more data points than unknowns), so simple matrix inversion isn't enough - and, as I said, I really need the error on the fit.

asked Aug 07 '09 by Martin Beckett

4 Answers

The following code should do the trick. I used the following formula for the residuals:

residual[i] =   (computed_x[i] - actual_x[i])^2
              + (computed_y[i] - actual_y[i])^2

I then derived the least-squares formulae based on the general least-squares fitting procedure described at Wolfram MathWorld.
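For anyone who wants to check the algebra, here is a sketch of that derivation (my working, not part of the original answer). Writing Ex = sum(old_x), Ey = sum(old_y), Exn = sum(new_x), Eyn = sum(new_y), Ex2 = sum(old_x^2), Exxn = sum(old_x*new_x), and so on (the same names used in the code below), setting the partial derivatives of the summed residual with respect to R, S, P and Q to zero gives four linear normal equations:

N*R  + Ex*P + Ey*Q            = Exn
N*S  + Ey*P - Ex*Q            = Eyn
Ex*R + Ey*S + (Ex2 + Ey2)*P   = Exxn + Eyyn
Ey*R - Ex*S + (Ex2 + Ey2)*Q   = Eyxn - Exyn

Solving the first two for R and S and substituting into the last two gives the closed-form expressions for P and Q used in the code.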

I tested this algorithm in Excel and it performs as expected. I used a collection of ten random points, which were then rotated, translated, and scaled by a randomly generated transformation matrix.

With no random noise applied to the output data, this program produces four parameters (P, Q, R, and S) which are identical to the input parameters, and an rSquared value of zero.

As more and more random noise is applied to the output points, the constants start to drift away from the correct values, and the rSquared value increases accordingly.
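One note on notation: the code calls two small helpers, Sum and SumProduct, which the original answer does not define. They are assumed to be simple loops along these lines:

// assumed helper: sum of the first n elements of a
float Sum(const float a[], int n)
{
    float total = 0;
    for (int i = 0; i < n; i++) { total += a[i]; }
    return total;
}

// assumed helper: sum of the element-wise products of a and b
float SumProduct(const float a[], const float b[], int n)
{
    float total = 0;
    for (int i = 0; i < n; i++) { total += a[i] * b[i]; }
    return total;
}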

Here is the code:

// test data
const int N = 1000;
float oldPoints_x[N] = { ... };
float oldPoints_y[N] = { ... };
float newPoints_x[N] = { ... };
float newPoints_y[N] = { ... };

// compute various sums and sums of products
// across the entire set of test data
float Ex =  Sum(oldPoints_x, N);
float Ey =  Sum(oldPoints_y, N);
float Exn = Sum(newPoints_x, N);
float Eyn = Sum(newPoints_y, N);
float Ex2 = SumProduct(oldPoints_x, oldPoints_x, N);
float Ey2 = SumProduct(oldPoints_y, oldPoints_y, N);
float Exxn = SumProduct(oldPoints_x, newPoints_x, N);
float Exyn = SumProduct(oldPoints_x, newPoints_y, N);
float Eyxn = SumProduct(oldPoints_y, newPoints_x, N);
float Eyyn = SumProduct(oldPoints_y, newPoints_y, N);

// compute the transformation constants
// using least-squares regression
float divisor = Ex*Ex + Ey*Ey - N*(Ex2 + Ey2);
float P = (Exn*Ex + Eyn*Ey - N*(Exxn + Eyyn))/divisor;
float Q = (Exn*Ey - Eyn*Ex + N*(Exyn - Eyxn))/divisor;
float R = (Exn - P*Ex - Q*Ey)/N;
float S = (Eyn - P*Ey + Q*Ex)/N;

// compute the rSquared error value
// low values represent a good fit
float rSquared = 0;
float x;
float y;
for (int i = 0; i < N; i++)
{
    x = R + P*oldPoints_x[i] + Q*oldPoints_y[i];
    y = S - Q*oldPoints_x[i] + P*oldPoints_y[i];
    rSquared += (x - newPoints_x[i]) * (x - newPoints_x[i]);
    rSquared += (y - newPoints_y[i]) * (y - newPoints_y[i]);
}
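If a single 'error on the fit' number is wanted, one common choice (my addition, not part of the original answer) is the RMS residual, dividing the summed squared residuals by the number of degrees of freedom - here 2*N residual components minus the 4 fitted parameters:

#include <math.h>

// hypothetical helper: turn the summed squared residual into an
// RMS per-point error estimate (2*n residual components, 4 parameters)
float RmsError(float rSquared, int n)
{
    return sqrt(rSquared / (2*n - 4));
}

// usage: float rms = RmsError(rSquared, N);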
answered by e.James


To find P, Q, R, and S, you can use least squares. I think the confusing thing is that the usual description of least squares uses x and y, but they don't match the x and y in your problem. You just need to translate your problem carefully into the least squares framework. In your case the independent variables are the untransformed coordinates x and y, the dependent variables are the transformed coordinates x' and y', and the adjustable parameters are P, Q, R, and S. (If this isn't clear enough, let me know and I'll post more detail.)
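To make that translation concrete (my illustration, using the notation from the question): because x' and y' are linear in P, Q, R, and S, each pair of corresponding points contributes two linear equations,

x' = x*P + y*Q + R
y' = y*P - x*Q + S

so stacking all the point pairs gives an overdetermined linear system in the four unknowns (P, Q, R, S), with rows of the form

( x   y   1   0 ) (P, Q, R, S)^T = x'
( y  -x   0   1 ) (P, Q, R, S)^T = y'

which is exactly the shape a linear least-squares solver expects.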

Once you've found P, Q, R, and S, the scale is sqrt(P^2 + Q^2), and you can then find the rotation from sin(rotation) = Q/scale and cos(rotation) = P/scale.
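A minimal sketch of that last step in C (my addition; atan2 is used so the rotation comes out in the correct quadrant):

#include <math.h>

// recover scale and rotation from the fitted constants, given
// P = scale*cos(rotation) and Q = scale*sin(rotation)
void ScaleAndRotation(float P, float Q, float *scale, float *rotation)
{
    *scale    = sqrt(P*P + Q*Q);
    *rotation = atan2(Q, P);   // radians
}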

answered by David Norman


You can use the levmar library to calculate this. It's tested and integrated into multiple products, including mine. It's licensed under the GPL, but if this is a non-open-source project, the author will change the license for you (for a fee).

answered by Steve


Define the 3x3 matrix T(P,Q,R,S) such that (x',y',1) = T (x,y,1). Then compute

A = \sum_i |(T (x_i,y_i,1)) - (x'_i,y'_i,1)|^2

and minimize A against (P,Q,R,S).
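Written out for the transformation in the question, T would be

        ( P   Q   R )
    T = (-Q   P   S )
        ( 0   0   1 )

so that the first two components of T (x,y,1) reproduce x' = R + Px + Qy and y' = S - Qx + Py.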

Coding this yourself is a medium-to-large-sized project unless you can guarantee that the data are well conditioned, especially when you want good error estimates out of the procedure. You're probably best off using an existing minimizer that supports error estimates.

Particle physics types would use MINUIT, either directly from CERNLIB (with the coding most easily done in Fortran 77) or from ROOT (with the coding in C++; it should also be accessible through the Python bindings). But that is a big installation if you don't have one of these tools already.

I'm sure that others can suggest other minimizers.

answered by dmckee --- ex-moderator kitten