I have n real-number variables (exact count unknown, and I don't really care), let's call them X[n]. I also have m >> n relationships between them, let's call them R[m], of the form:

X[i] = alpha*X[j]

where alpha is a nonzero positive real number, and i and j are distinct, but the (i, j) pair is not necessarily unique (i.e. there can be two relationships between the same variables with different alpha factors).
What I'm trying to do is find a set of alpha parameters that solves the overdetermined system in some least-squares sense. The ideal solution would minimize the sum of squared differences between each equation's parameter and its chosen value, but I'm satisfied with the following approximation:
If I turn the m equations into a homogeneous overdetermined system of n unknowns, any pseudo-inverse based numeric solver will give me the trivial solution (all zeroes). So what I currently do is add one more equation to the mix, x[0] = 1 (actually any constant will do), and solve the resulting system in the least-squares sense using the Moore-Penrose pseudo-inverse. While this minimizes the sum of (x[0] - 1)^2 plus the squared sums of x[i] - alpha*x[j], I find it a good and numerically stable approximation to my problem. Here is an example:
a = 1
a = 2*b
b = 3*c
a = 5*c
in Octave:
A = [
1 0 0;
1 -2 0;
0 1 -3;
1 0 -5;
]
B = [1; 0; 0; 0]
C = pinv(A) * B

or, better yet (since B is zero except for its first entry):

C = pinv(A)(:,1)
This yields the values for a, b, c: [0.99383; 0.51235; 0.19136], which gives me the following (reasonable) relationships:
a = 1.9398*b
b = 2.6774*c
a = 5.1935*c
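For reference, the same computation can be reproduced in Python with NumPy (an assumption here: NumPy stands in for the Octave session; np.linalg.lstsq solves the least-squares problem directly without explicitly forming the pseudo-inverse):

```python
import numpy as np

# Augmented system: the first row pins a = 1, the remaining rows encode
# a - 2b = 0, b - 3c = 0, a - 5c = 0.
A = np.array([
    [1.0,  0.0,  0.0],
    [1.0, -2.0,  0.0],
    [0.0,  1.0, -3.0],
    [1.0,  0.0, -5.0],
])
B = np.array([1.0, 0.0, 0.0, 0.0])

# Least-squares solution of A x = B; equivalent to pinv(A) @ B here.
x, *_ = np.linalg.lstsq(A, B, rcond=None)
print(x)  # approximately [0.99383, 0.51235, 0.19136]
```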
So right now I need to implement this in C / C++ / Java, and I have the following questions:
Is there a faster method to solve my problem, or am I on the right track with generating the overdetermined system and computing the pseudo-inverse?
My current solution requires a singular value decomposition and three matrix multiplications, which is a little much considering m can be 5000 or even 10000. Are there faster ways to compute the pseudo-inverse (actually, I only need its first column, not the entire matrix, given that B is zero except for its first row), taking advantage of the sparsity of the matrix (each row contains exactly two non-zero values, one of which is always one and the other always negative)?
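One way to exploit that sparsity is an iterative least-squares solver such as LSQR, which works purely through sparse matrix-vector products and never forms the pseudo-inverse or the SVD. A sketch using SciPy's implementation (an assumption: SciPy is available; the same algorithm also exists as standalone Fortran/C code):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Build the same 4x3 system in sparse form: each relationship
# x[i] = alpha * x[j] contributes one row with entries 1 and -alpha,
# plus the pinning row x[0] = 1.
rows = [0, 1, 1, 2, 2, 3, 3]
cols = [0, 0, 1, 1, 2, 0, 2]
vals = [1.0, 1.0, -2.0, 1.0, -3.0, 1.0, -5.0]
A = csr_matrix((vals, (rows, cols)), shape=(4, 3))
b = np.array([1.0, 0.0, 0.0, 0.0])

# Each LSQR iteration costs O(nnz), i.e. proportional to the number of
# relationships m, so large m stays cheap compared to a dense SVD.
x = lsqr(A, b)[0]
print(x)  # approximately [0.99383, 0.51235, 0.19136]
```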
What math libraries would you suggest to use for this? Is LAPACK ok?
I'm also open to any other suggestions, provided that they are numerically stable and asymptotically fast (let's say k*n^2, where k can be large).
Your problem is ill-posed. If you treat it as a function of n variables (the least squares of the differences), the function has exactly ONE global-minimum hyperplane.
That global minimum will always contain zero unless you fix one of the variables to a nonzero value, or restrict the function's domain in some other way.
If what you want is a parameterization of the solution hyperplane, you can get that from the Moore-Penrose pseudo-inverse (http://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse); check the section on obtaining all solutions.
(Please note I've used the word "hyperplane" in a technically incorrect manner. I mean a "flat" unbounded subset of your parameter space: a line, a plane, something that can be parameterized by one or more vectors. For some reason I can't find the general noun for such objects.)
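A minimal NumPy sketch of that formula: every solution of A x = b can be written as x = pinv(A) b + (I - pinv(A) A) w for an arbitrary vector w, where the second term ranges over the null space of A. Shown here on a consistent homogeneous subset of the example (just a = 2b and b = 3c; this two-equation subset is my illustration, not the asker's full system), whose solutions form a line through the origin:

```python
import numpy as np

# Two consistent homogeneous relationships: a = 2b, b = 3c.
A = np.array([[1.0, -2.0,  0.0],
              [0.0,  1.0, -3.0]])
P = np.linalg.pinv(A)

# All solutions of A x = 0: x = (I - pinv(A) A) w for arbitrary w.
# The matrix I - pinv(A) A projects onto the null space of A.
N = np.eye(3) - P @ A
w = np.array([1.0, 1.0, 1.0])   # any vector works
x = N @ w

print(np.allclose(A @ x, 0))    # True: x satisfies both relationships
print(x / x[2])                 # normalized direction [6, 3, 1], i.e. a=6c, b=3c
```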