 

Why does the cost function of logistic regression have a logarithmic expression?

The cost function for logistic regression is:

cost(h_θ(x), y) = -log(h_θ(x))      when y = 1
cost(h_θ(x), y) = -log(1 - h_θ(x))  when y = 0

My question is: what is the basis for putting the logarithmic expression in the cost function? Where does it come from? I believe you can't just put "-log" out of nowhere. If someone could explain the derivation of the cost function, I would be grateful. Thank you.

asked Oct 07 '15 by Nipun Alahakoon

People also ask

Why does the cost function of logistic regression have a logarithmic expression?

In order to ensure the cost function is convex (and therefore ensure convergence to the global minimum), the cost function is transformed using the logarithm of the sigmoid function.

What is the cost function of the logistic regression?

The cost function used in Logistic Regression is Log Loss.

Which form of logarithm function is used in logistic regression?

In statistics, the (binary) logistic model (or logit model) is a statistical model that models the probability of one event (out of two alternatives) taking place by having the log-odds (the logarithm of the odds) for the event be a linear combination of one or more independent variables ("predictors").

Why Cannot we use the cost function of linear regression in logistic regression?

Gradient descent requires a convex cost function. Mean squared error, commonly used for linear regression models, isn't convex for logistic regression: composing the squared error with the sigmoid yields a non-convex function of the parameters. The negative log-likelihood, however, is always convex.


2 Answers

This cost function is simply a reformulation of the maximum-(log-)likelihood criterion.

The logistic regression model is:

P(y=1 | x) = logistic(θ x)
P(y=0 | x) = 1 - P(y=1 | x) = 1 - logistic(θ x)

Assuming independent samples, the likelihood is written as:

L = P(y_0, ..., y_n | x_0, ..., x_n) = \prod_i P(y_i | x_i) 

The log-likelihood is:

l = log L = \sum_i log P(y_i | x_i) 

We want to find θ which maximizes the likelihood:

max_θ \prod_i P(y_i | x_i) 

This is the same as maximizing the log-likelihood:

max_θ \sum_i log P(y_i | x_i) 

We can rewrite this as a minimization of the cost C=-l:

min_θ \sum_i - log P(y_i | x_i)

where:

P(y_i | x_i) = logistic(θ x_i)      when y_i = 1
P(y_i | x_i) = 1 - logistic(θ x_i)  when y_i = 0
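For concreteness, here is a minimal NumPy sketch of this criterion (the function names and toy data are my own, not part of the answer). It evaluates C = -l by picking the appropriate -log term for each sample:

    import numpy as np

    def logistic(z):
        # Sigmoid function: maps any real number into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def cost(theta, X, y):
        # C = -l = sum_i -log P(y_i | x_i), exactly as derived above.
        p = logistic(X @ theta)  # P(y=1 | x_i) for every sample
        return np.where(y == 1, -np.log(p), -np.log(1.0 - p)).sum()

    # Toy data (made up for illustration): 4 samples, intercept + 1 feature
    X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5], [1.0, 3.0]])
    y = np.array([1, 0, 0, 1])
    print(cost(np.array([0.1, 0.4]), X, y))  # total cost, ~1.99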
answered Sep 21 '22 by ysdx


Source: my own notes taken during Stanford's Machine Learning course on Coursera, by Andrew Ng. All credit goes to him and this organization. The course is freely available for anybody to take at their own pace. The images are made by myself using LaTeX (formulas) and R (graphics).

Hypothesis function

Logistic regression is used when the variable y to be predicted can only take discrete values (i.e., classification).

Considering a binary classification problem (y can take only two values), and given a set of parameters θ and a set of input features x, the hypothesis function can be defined so that it is bounded between [0, 1], where g() represents the sigmoid function:

h_θ(x) = g(θᵀx) = 1 / (1 + e^(-θᵀx))

At the same time, this hypothesis function represents the estimated probability that y = 1 on input x, parameterized by θ:

h_θ(x) = P(y = 1 | x; θ)
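As a quick illustration (a sketch of my own, not one of the course images), the hypothesis follows directly from the two formulas above:

    import numpy as np

    def g(z):
        # Sigmoid function g(z) = 1 / (1 + e^-z); its output always lies in (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def h(theta, x):
        # Hypothesis h_theta(x) = g(theta' x): the estimated P(y = 1 | x; theta).
        return g(theta @ x)

    theta = np.array([-1.0, 2.0])
    x = np.array([1.0, 0.75])  # first component is the intercept term
    print(h(theta, x))         # ~0.62: estimated probability that y = 1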

Cost function

The cost function represents the optimization objective.

J(θ) = (1/m) \sum_i Cost(h_θ(x_i), y_i)

A possible definition of the cost function would be the mean of the Euclidean distance between the hypothesis h_θ(x) and the actual value y over all m samples in the training set. However, because the hypothesis function is formed with the sigmoid function, this definition would result in a non-convex cost function, which means that a local minimum could easily be found before reaching the global minimum. In order to ensure the cost function is convex (and therefore ensure convergence to the global minimum), the cost function is transformed using the logarithm of the sigmoid function.

Cost(h_θ(x), y) = -log(h_θ(x))      if y = 1
Cost(h_θ(x), y) = -log(1 - h_θ(x))  if y = 0
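The non-convexity claim above can be checked numerically. The following sketch (my own illustration with made-up one-dimensional data, not from the course) scans a single parameter θ and estimates the curvature of both costs with second finite differences; the squared-error cost shows regions of negative curvature, while the logarithmic cost does not:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Made-up one-dimensional training set: one feature, binary labels
    x = np.array([-2.0, -1.0, 0.5, 2.0])
    y = np.array([0.0, 0.0, 1.0, 1.0])

    thetas = np.linspace(-6.0, 6.0, 1201)
    mse = np.array([np.mean((sigmoid(t * x) - y) ** 2) for t in thetas])
    log_cost = np.array([np.mean(-y * np.log(sigmoid(t * x))
                                 - (1 - y) * np.log(1.0 - sigmoid(t * x)))
                         for t in thetas])

    # Second finite differences approximate curvature along theta;
    # negative values reveal non-convex regions.
    print("min curvature, squared error:", np.diff(mse, 2).min())       # < 0: non-convex
    print("min curvature, log cost:     ", np.diff(log_cost, 2).min())  # >= 0 (up to rounding): convex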

Since exactly one of y and (1 - y) is nonzero for a binary label, the two cases can be combined into a single expression, and the optimization objective function can be defined as the mean of the costs/errors over the training set:

J(θ) = -(1/m) \sum_i [ y_i log(h_θ(x_i)) + (1 - y_i) log(1 - h_θ(x_i)) ]
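A minimal vectorized implementation of J(θ) might look as follows (a sketch of my own; the variable names are assumptions, not from the course materials):

    import numpy as np

    def J(theta, X, y):
        # J(theta) = -(1/m) * sum(y * log(h) + (1 - y) * log(1 - h))
        m = y.size
        h = 1.0 / (1.0 + np.exp(-(X @ theta)))  # h_theta(x) for all m samples at once
        return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m

    # Hypothetical toy data: 4 samples, intercept + 1 feature
    X = np.array([[1.0, 2.0], [1.0, -1.0], [1.0, 0.5], [1.0, 3.0]])
    y = np.array([1.0, 0.0, 0.0, 1.0])
    print(J(np.array([0.1, 0.4]), X, y))  # mean cost over the 4 samples, ~0.50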

answered Sep 23 '22 by Peque