I am stuck at a point while implementing gradient descent in Python.
My gradient descent update step is:
for iter in range(1, num_iters):
    hypo_function = np.sum(np.dot(np.dot(theta.T, X) - y, X[:, iter]))
    theta_0 = theta[0] - alpha * (1.0 / m) * hypo_function
    theta_1 = theta[1] - alpha * (1.0 / m) * hypo_function
Got an error:
---> hypo_function = np.sum(np.dot(np.dot(theta.T, X)-y, X[:,iter]))
ValueError: shapes (1,97) and (2,) not aligned: 97 (dim 1) != 2 (dim 0)
PS: Here my X is (2L, 97L), y is (97L,), and theta is (2L,).
np.dot(a, b) takes the inner product of a and b if a and b are vectors (1-D arrays). If a and b are 2-D arrays, np.dot(a, b) performs matrix multiplication.
It will throw a ValueError if the size of the last dimension of a does not match the size of the second-to-last dimension of b; they have to match.
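For example, with the same shapes as in your error message (this is just an illustrative sketch, not your original script; the array contents here are made up):

import numpy as np

a = np.ones((1, 97))  # 2-D array, shape (1, 97)
c = np.ones(97)       # 1-D array, shape (97,)
b = np.ones(2)        # 1-D array, shape (2,)

print(np.dot(a, c).shape)  # (1,) -- last dim of a (97) matches the size of c (97)
# np.dot(a, b) raises ValueError: shapes (1,97) and (2,) not aligned, exactly like the error above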
In your case, one of your dot products is trying to multiply a 1-by-97 array by a length-2 array, so the 97 and the 2 do not match. You need to fix your operands so the dot product / matrix multiplication is computable.
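One common way to make the shapes line up is to compute the whole gradient at once and update both components of theta together, instead of indexing X with the loop counter. A minimal sketch, assuming X is (2, 97) with a row of ones for the intercept term, y is (97,), and theta is (2,) as in your PS (alpha=0.01 and num_iters=1500 below are placeholder values):

import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    m = y.size                            # number of training examples (97)
    for _ in range(num_iters):
        predictions = np.dot(theta, X)    # (2,) . (2, 97) -> (97,)
        errors = predictions - y          # (97,)
        gradient = np.dot(X, errors) / m  # (2, 97) . (97,) -> (2,)
        theta = theta - alpha * gradient  # update both theta components simultaneously
    return theta

# e.g. theta = gradient_descent(X, y, np.zeros(2), alpha=0.01, num_iters=1500)

Every dot product here has matching inner dimensions, so no ValueError is raised.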