
Is my implementation of stochastic gradient descent correct?

I am trying to implement stochastic gradient descent, but I am not sure my implementation is 100% correct.

  • The cost produced by my stochastic gradient descent algorithm is sometimes quite far from the one produced by fminunc or by batch gradient descent.
  • While the batch gradient descent cost converges with a learning rate alpha of 0.2, I have to set alpha to 0.0001 for my stochastic implementation so that it does not diverge. Is this normal?

Here are some results I obtained with a training set of 10,000 examples and num_iter = 100 or 500:

    FMINUNC : 
    Iteration  #100 | Cost: 5.147056e-001

    BATCH GRADIENT DESCENT  500 ITER
    Iteration #500 - Cost = 5.535241e-001

    STOCHASTIC GRADIENT DESCENT 100 ITER
    Iteration #100 - Cost = 5.683117e-001  % First time I launched
    Iteration #100 - Cost = 7.047196e-001  % Second time I launched

Batch gradient descent implementation for logistic regression

J_history = zeros(num_iters, 1);   % cost recorded at each iteration

for iter = 1:num_iters 

    % Cost and gradient over the full training set, then one descent step
    [J, gradJ] = lrCostFunction(theta, X, y, lambda);
    theta = theta - alpha * gradJ;
    J_history(iter) = J;

    fprintf('Iteration #%d - Cost = %e\r\n', iter, J_history(iter));
end

Stochastic gradient descent implementation for logistic regression

% number of training examples
m = length(y);

% Step 1: shuffle the data (keep X and y aligned)
data = [y, X];
data = data(randperm(size(data,1)),:);
y = data(:,1);
X = data(:,2:end);

J_history = zeros(num_iters, 1);   % cost of the last example seen in each pass

for iter = 1:num_iters 

     for i = 1:m
        x = X(i,:);   % select a single training example
        [J, gradJ] = lrCostFunction(theta, x, y(i,:), lambda);
        theta = theta - alpha * gradJ;   % update theta using this example only
     end

     % Note: J is the cost of the last single example, not of the whole set
     J_history(iter) = J;
     fprintf('Iteration #%d - Cost = %e\r\n', iter, J);

end

For reference, here is the logistic regression cost function used in my example

function [J, grad] = lrCostFunction(theta, X, y, lambda)

m = length(y); % number of training examples

% We calculate J    
hypothesis = sigmoid(X*theta); 
costFun = (-y.*log(hypothesis) - (1-y).*log(1-hypothesis));    
J = (1/m) * sum(costFun) + (lambda/(2*m))*sum(theta(2:length(theta)).^2);

% We calculate grad using the partial derivatives
beta = (hypothesis-y); 
grad = (1/m)*(X'*beta);
temp = theta;  
temp(1) = 0;   % the bias term theta(1) is not regularized
grad = grad + (lambda/m)*temp; 
grad = grad(:);

end
asked Jan 25 '14 by alexandrekow

1 Answer

This is pretty much OK. If you are worried about choosing an appropriate learning rate alpha, you should think about applying a line search method.

Line search is a method that chooses an optimal learning rate for gradient descent at every iteration, which is better than using a fixed learning rate throughout the whole optimization process. The optimal value of the learning rate alpha is the one that locally (starting from the current theta and moving in the direction of the negative gradient) minimizes the cost function.

At each iteration of gradient descent, start from the learning rate alpha = 0 and gradually increase alpha by a fixed step, for example deltaAlpha = 0.01. Recompute the parameters theta and evaluate the cost function. Since the cost function is convex, as alpha increases (that is, as you move further in the direction of the negative gradient) the cost function will first decrease and then, at some point, start increasing. At that moment, stop the line search and take the last alpha from before the cost function started increasing, then update the parameters theta with that alpha. If the cost function never starts increasing, stop at alpha = 1.
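
As a rough illustration, here is a minimal Octave/MATLAB sketch of this line search, reusing lrCostFunction from the question; deltaAlpha = 0.01 and the upper bound alpha = 1 are just the values suggested above, and names such as bestAlpha are my own additions, not part of the original code.

deltaAlpha = 0.01;                 % step used to probe candidate learning rates
J_history  = zeros(num_iters, 1);

for iter = 1:num_iters

    % Cost and gradient at the current theta (full training set)
    [J, gradJ] = lrCostFunction(theta, X, y, lambda);

    % Line search: grow alpha until the cost stops decreasing
    bestAlpha = 0;
    bestJ = J;
    for alpha = deltaAlpha:deltaAlpha:1
        Jtrial = lrCostFunction(theta - alpha * gradJ, X, y, lambda);
        if Jtrial > bestJ
            break;                 % cost started increasing: stop the search
        end
        bestAlpha = alpha;
        bestJ = Jtrial;
    end

    % Update theta with the locally best learning rate found
    theta = theta - bestAlpha * gradJ;
    J_history(iter) = bestJ;
end

Note that this performs up to 1/deltaAlpha = 100 extra cost evaluations per iteration; that is the price paid for not having to tune alpha by hand.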

Note: for large regularization factors (lambda = 100, lambda = 1000) it is possible that deltaAlpha is too big and that gradient descent diverges. If that is the case, decrease deltaAlpha by a factor of 10 (deltaAlpha = 0.001, deltaAlpha = 0.0001) until you reach a deltaAlpha for which gradient descent converges.

Also, you should think about using a terminating condition other than a fixed number of iterations, e.g. stopping when the difference between the costs of two consecutive iterations becomes small enough (less than some epsilon).
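
For instance, here is a small sketch of such a stopping test on top of the batch loop from the question; epsilon = 1e-6 is an assumed tolerance, not a recommended value.

epsilon = 1e-6;            % assumed convergence tolerance
J_prev  = Inf;

for iter = 1:num_iters
    [J, gradJ] = lrCostFunction(theta, X, y, lambda);
    theta = theta - alpha * gradJ;

    % Stop once the cost barely changes between two consecutive iterations
    if abs(J_prev - J) < epsilon
        fprintf('Converged after %d iterations (Cost = %e)\r\n', iter, J);
        break;
    end
    J_prev = J;
end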

answered Sep 20 '22 by Kodin