Implementation of logistic regression formula in R

I am trying to build my own logistic regression function using stochastic gradient descent in R, but my current implementation makes the weights grow without bound, so the loop never halts:

# Logistic regression
# Takes training example vector, output vector, learn rate scalar, and convergence delta limit scalar
my_logr <- function(training_examples,training_outputs,learn_rate,conv_lim) {
  # Initialize gradient vector
  gradient <- as.vector(rep(0,NCOL(training_examples)))
  # Difference between weights
  del_weights <- as.matrix(1)
  # Weights
  weights <- as.matrix(runif(NCOL(training_examples)))
  weights_old <- as.matrix(rep(0,NCOL(training_examples)))

  # Compute gradient
  while(norm(del_weights) > conv_lim) {

    for (k in 1:NROW(training_examples)) {
      gradient <- gradient + 1/NROW(training_examples)*
        ((t(training_outputs[k]*training_examples[k,]
            /(1+exp(training_outputs[k]*t(weights)%*%as.numeric(training_examples[k,]))))))
    }

    # Update weights
    weights <- weights_old - learn_rate*gradient
    del_weights <- as.matrix(weights_old - weights)
    weights_old <- weights

    print(weights)
  }
  return(weights)
}

The function can be tested with the following code:

data(iris) # Iris data already present in R    
# Dataset for part a (first 50 vs. last 100)
iris_a <- iris
iris_a$Species <- as.integer(iris_a$Species)
# Convert labels to a binary class: setosa (1) vs. the rest (-1)
iris_a$Species <- ifelse(iris_a$Species == 1, 1, -1)
random_sample <- sample(1:NROW(iris),50)

weights_a <- my_logr(iris_a[random_sample,1:4],iris_a$Species[random_sample],1,.1)
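
As a sanity check on the label conversion: iris has 50 rows per species, so the table below should show 50 labels of 1 (setosa) and 100 of -1.

table(iris_a$Species)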

I double-checked my algorithm against Abu-Mostafa's, which is as follows:

  1. Initialize the weight vector
  2. At each time step, compute the gradient:
     gradient <- -1/N * sum_{n=1}^{N} (training_answer_n * training_vector_n / (1 + exp(training_answer_n * dot(weight, training_vector_n))))
  3. weight_new <- weight - learn_rate * gradient
  4. Repeat until the weight delta is sufficiently small (see the R sketch below)
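
For reference, here is a direct vectorized translation of that update rule into R (a minimal sketch, separate from my function above; X, y, w, and learn_rate are placeholder names for the N x d training matrix, the length-N vector of +1/-1 labels, the current weight vector, and the step size):

# One batch gradient step for logistic error with +1/-1 labels
logr_step <- function(X, y, w, learn_rate) {
  s <- as.vector(X %*% w)                          # dot(weight, training_vector_n) for every row n
  gradient <- -colMeans(y * X / (1 + exp(y * s)))  # the sum above, times -1/N
  w - learn_rate * gradient                        # weight_new
}

Note that X must be a numeric matrix (e.g. as.matrix(iris_a[random_sample, 1:4])), since %*% does not accept a data frame.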

Am I missing something here?


1 Answer

From a mathematical perspective, an unconstrained magnitude on the weight vector does not yield a unique solution: setosa is linearly separable from the other two species, so the error keeps shrinking as the weight vector grows, and gradient descent never settles. When I added these two lines to the classifier function, it converged in two steps:

# Normalize
weights <- weights/norm(weights)

...

# Update weights
weights <- weights_old - learn_rate*gradient
weights <- weights / norm(weights)

I couldn't make @SimonO101's suggestion work, and I'm not using this code for real work (there are built-ins like glm for that), so loops I understand are good enough. The whole function is as follows:

# Logistic regression
# Takes training example vector, output vector, learn rate scalar, and convergence delta limit scalar
my_logr <- function(training_examples,training_outputs,learn_rate,conv_lim) {
  # Initialize gradient vector
  gradient <- as.vector(rep(0,NCOL(training_examples)))
  # Difference between weights
  del_weights <- as.matrix(1)
  # Weights
  weights <- as.matrix(runif(NCOL(training_examples)))
  weights_old <- as.matrix(rep(0,NCOL(training_examples)))

  # Normalize
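  # (norm() with no type argument returns the one-norm, "O", not the Euclidean norm)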
  weights <- weights/norm(weights)

  # Compute gradient
  while(norm(del_weights) > conv_lim) {

    # Reset the gradient, then accumulate it over every training row
    gradient <- as.vector(rep(0,NCOL(training_examples)))
    for (k in 1:NROW(training_examples)) {
      gradient <- gradient - 1/NROW(training_examples)*
        ((t(training_outputs[k]*training_examples[k,]
            /(1+exp(training_outputs[k]*t(weights)%*%as.numeric(training_examples[k,]))))))
    }

    # Update weights
    weights <- weights_old - learn_rate*gradient
    weights <- weights / norm(weights)
    del_weights <- as.matrix(weights_old - weights)
    weights_old <- weights

    print(weights)
  }
  return(weights)
}
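
For comparison, the same model can be fit with R's built-in glm. A minimal sketch, reusing iris_a and random_sample from the question's test code (train is just a local name here); binomial glm expects 0/1 labels rather than -1/+1:

# Recode the -1/+1 labels to 0/1 and fit with the built-in glm
train <- iris_a[random_sample, ]
train$Species <- as.integer(train$Species == 1)
fit <- glm(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
           data = train, family = binomial)
coef(fit)

Because setosa is linearly separable from the other two species, glm warns that fitted probabilities numerically 0 or 1 occurred: the same unbounded-weight behavior the question ran into, just detected for you.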