I'm having a bit of trouble understanding how this function works.
a, b = scipy.linalg.lstsq(X, w*signal)[0]
I know that signal is the array representing the signal, and currently w is just [1, 1, 1, 1, ...].
How should I manipulate X or w to imitate weighted least squares or iteratively reweighted least squares?
If you multiply X and y by sqrt(weights), you can calculate weighted least squares. You can get the formula from the following link:
http://en.wikipedia.org/wiki/Linear_least_squares_%28mathematics%29#Weighted_linear_least_squares
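The reason for the square root: following that article, the weighted objective reduces to an ordinary least-squares problem once each row is scaled by the square root of its weight (a short summary in LaTeX notation):

\min_\beta \sum_i w_i \,(y_i - x_i^\top \beta)^2
  \;=\; \min_\beta \bigl\| \operatorname{diag}(\sqrt{w})\,(y - X\beta) \bigr\|_2^2

so running lstsq on diag(sqrt(w)) X and diag(sqrt(w)) y gives the WLS solution.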
Here is an example.
Prepare data:
import numpy as np

np.random.seed(0)
N = 20
X = np.random.rand(N, 3)                      # design matrix
w = np.array([1.0, 2.0, 3.0])                 # true coefficients (not the weight vector from the question)
y = np.dot(X, w) + np.random.rand(N) * 0.1    # noisy observations
OLS:
from scipy import linalg
w1 = linalg.lstsq(X, y)[0]
print(w1)
output:
[ 0.98561405 2.0275357 3.05930664]
WLS:
weights = np.linspace(1, 2, N)            # per-observation weights
Xw = X * np.sqrt(weights)[:, None]        # scale each row of X by sqrt(weight)
yw = y * np.sqrt(weights)                 # scale y the same way
print(linalg.lstsq(Xw, yw)[0])
output:
[ 0.98799029 2.02599521 3.0623824 ]
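As an extra cross-check, the closed-form WLS solution from the linked article, beta = (X^T W X)^{-1} X^T W y with W = diag(weights), gives the same coefficients. The snippet below is just a sketch reusing the X, y, and weights defined above:

# Cross-check with the closed-form WLS normal equations (sketch)
# beta = (X^T W X)^{-1} X^T W y, with W = diag(weights)
W = np.diag(weights)
print(np.linalg.solve(X.T.dot(W).dot(X), X.T.dot(W).dot(y)))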
Check result by statsmodels:
import statsmodels.api as sm
mod_wls = sm.WLS(y, X, weights=weights)
res = mod_wls.fit()
print(res.params)
output:
[ 0.98799029 2.02599521 3.0623824 ]
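To imitate iteratively reweighted least squares (the other part of the question), you can repeat the same sqrt-weight trick, recomputing the weights from the residuals at each iteration. The sketch below uses Huber-style weights purely as an illustrative assumption; the weighting function, delta, and tolerance are my own choices, not part of the answer above:

# Minimal IRLS sketch: re-solve a weighted least-squares problem,
# updating the weights from the current residuals until the
# coefficients stop changing.
def irls(X, y, n_iter=20, delta=1.0, tol=1e-8):
    beta = linalg.lstsq(X, y)[0]                   # start from the plain OLS fit
    for _ in range(n_iter):
        resid = y - X.dot(beta)
        abs_r = np.maximum(np.abs(resid), 1e-12)   # guard against division by zero
        w_it = np.minimum(1.0, delta / abs_r)      # Huber-style weights (assumption)
        sw = np.sqrt(w_it)
        beta_new = linalg.lstsq(X * sw[:, None], y * sw)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

print(irls(X, y))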
Create a diagonal matrix W from the elementwise square roots of w. Then I think you just want:
scipy.linalg.lstsq(np.dot(W, X), np.dot(W, signal))
Following http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#Weighted_linear_least_squares
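A minimal sketch of this diagonal-matrix form, borrowing the X, y, and weights from the example in the other answer purely for illustration; for large N, scaling the rows directly (as above) avoids building the full N x N matrix:

# Diagonal-matrix form of the same sqrt-weight trick (illustrative sketch)
W = np.diag(np.sqrt(weights))
print(linalg.lstsq(np.dot(W, X), np.dot(W, y))[0])  # matches the WLS result above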