I know how to solve A.X = B by least squares using Python:
Example:
import numpy
A = [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 0, 0]]
B = [1, 1, 1, 1, 1]
X = numpy.linalg.lstsq(A, B, rcond=None)  # returns (solution, residuals, rank, singular values)
print(X[0])
# [ 5.00000000e-01  5.00000000e-01 -1.66533454e-16 -1.11022302e-16]
But what about solving the same equation when the weight matrix is not the identity:
A.X = B, weighted by W
Example:
A=[[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,0,0]]
B=[1,1,1,1,1]
W=[1,2,3,4,5]
Each term in the weighted least squares criterion includes an additional weight that determines how much each observation in the data set influences the final parameter estimates. Weighting can be used with functions that are either linear or nonlinear in the parameters.
Like all of the least squares methods discussed so far, weighted least squares is an efficient method that makes good use of small data sets. It also shares the ability to provide different types of easily interpretable statistical intervals for estimation, prediction, calibration and optimization.
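To spell out why the square root appears in the code below (this is the standard reduction of weighted to ordinary least squares, not anything NumPy-specific): with weights \(w_i > 0\),

\[
\min_x \sum_i w_i \,(b_i - a_i^\top x)^2
  = \min_x \sum_i \bigl(\sqrt{w_i}\,b_i - \sqrt{w_i}\,a_i^\top x\bigr)^2
  = \min_x \bigl\lVert W^{1/2} B - W^{1/2} A\,x \bigr\rVert_2^2,
\qquad W^{1/2} = \operatorname{diag}(\sqrt{w_1},\dots,\sqrt{w_n}).
\]

So scaling each row of A and each entry of B by the square root of its weight turns the weighted problem into an ordinary least squares problem, which is exactly what both snippets below do.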
I don't know how you have defined your weights, but you could try this if appropriate:
import numpy as np

A = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 0, 0]])
B = np.array([1, 1, 1, 1, 1])
W = np.array([1, 2, 3, 4, 5])

# Scale each row of A and each entry of B by the square root of its weight,
# then solve the resulting ordinary least squares problem.
Aw = A * np.sqrt(W[:, np.newaxis])
Bw = B * np.sqrt(W)
X = np.linalg.lstsq(Aw, Bw, rcond=None)
print(X[0])
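Note that this particular example system is consistent (B is exactly reachable), so the weighted fit reproduces B no matter what positive weights you choose. A quick sanity check, reusing A, B and X from the snippet above:

# The example system is consistent, so the weighted solution reproduces B
# exactly (up to floating point noise), regardless of the weights.
print(np.allclose(np.dot(A, X[0]), B))  # True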
I found another approach (using W as a diagonal matrix, and matrix products):
A = [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 0, 0]]
B = [1, 1, 1, 1, 1]
W = [1, 2, 3, 4, 5]

# Build the square root of the weight matrix as a diagonal matrix,
# then scale A and B with matrix products.
Wsqrt = np.sqrt(np.diag(W))
Aw = np.dot(Wsqrt, A)
Bw = np.dot(Wsqrt, B)
X = np.linalg.lstsq(Aw, Bw, rcond=None)
print(X[0])
Both approaches give the same values and the same results.
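A minimal self-contained check of that equivalence (just re-running both variants side by side and comparing; nothing beyond the NumPy calls already used above):

import numpy as np

A = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 0, 0]])
B = np.array([1, 1, 1, 1, 1])
W = np.array([1, 2, 3, 4, 5])

# Approach 1: broadcast sqrt(W) over the rows of A.
X1 = np.linalg.lstsq(A * np.sqrt(W[:, np.newaxis]), B * np.sqrt(W), rcond=None)[0]

# Approach 2: explicit diagonal matrix and matrix products.
Wsqrt = np.sqrt(np.diag(W))
X2 = np.linalg.lstsq(np.dot(Wsqrt, A), np.dot(Wsqrt, B), rcond=None)[0]

print(np.allclose(X1, X2))  # True: both set up the same ordinary least squares problem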