
Vectorize Gradient Descent in NumPy

I have implemented this gradient descent in Numpy:

import numpy as np

def gradientDescent(X, y, theta, alpha, iterations):
    m = len(y)  # number of training examples

    for i in range(iterations):
        h = np.dot(X, theta)                             # predictions for all examples at once
        loss = h - y                                     # vectorized error term
        theta = theta - (alpha / m) * np.dot(X.T, loss)  # gradient step: update theta

    return theta
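For context, here is a minimal way the function above might be called; the toy data and hyperparameters are made up purely for illustration:

# Hypothetical usage; toy data and hyperparameters chosen only for illustration.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])        # design matrix with a bias column
y = np.array([2.0, 2.5, 3.5])     # target values
theta = np.zeros(2)               # initial parameters

theta = gradientDescent(X, y, theta, alpha=0.1, iterations=1000)
print(theta)                      # approaches the least-squares fit, roughly [1.17, 0.75]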

While the other parts of the code are completely vectorized, there is still a for loop here that seems impossible to eliminate: since theta has to be updated at every step, I don't see how to vectorize it or write it more efficiently.

Thank you for your help.

asked by Luigi Tiburzi
1 Answer

You can't vectorize the for loop, because each iteration updates state: the theta produced by one step is the input to the next. Vectorization is primarily useful when the calculation can be arranged so that each iteration computes an independent (in some sense) result.

answered by Nir Friedman
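As a side note (not part of the original answer), here is a minimal sketch contrasting a per-example Python loop with the vectorized update from the question. Both compute the same single gradient step; only the outer loop over iterations has to stay sequential, because step k+1 needs the theta produced by step k. The helper names and random data are made up for illustration.

import numpy as np

def step_loop(X, y, theta, alpha):
    # One gradient step computed with an explicit Python loop over examples.
    m = len(y)
    grad = np.zeros_like(theta)
    for i in range(m):
        grad += (np.dot(X[i], theta) - y[i]) * X[i]
    return theta - (alpha / m) * grad

def step_vectorized(X, y, theta, alpha):
    # The same step expressed with matrix operations, as in the question's code.
    m = len(y)
    loss = np.dot(X, theta) - y
    return theta - (alpha / m) * np.dot(X.T, loss)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)
theta = np.zeros(3)

# Both produce the same result for a single step; the remaining for loop over
# iterations cannot be removed, since each step consumes the previous theta.
print(np.allclose(step_loop(X, y, theta, 0.01),
                  step_vectorized(X, y, theta, 0.01)))   # True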


