 

Why doesn't Keras need the gradient of a custom loss function?

To my understanding, in order to update model parameters through gradient descent, the algorithm needs at some point to calculate the derivative of the error function E with respect to the output y: dE/dy. Nevertheless, I've seen that if you want to use a custom loss function in Keras, you only need to define E itself, not its derivative. What am I missing?

Each loss function has a different derivative, for example:

If the loss function is mean squared error, E = (y_true - y)^2: dE/dy = -2(y_true - y)

If the loss function is cross entropy, E = -y_true * log(y): dE/dy = -y_true / y

Again, how is it possible that the model does not ask me for the derivative? How does it calculate the gradient of the loss function with respect to the parameters from just the definition of E?
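For concreteness, here is a minimal sketch of what I mean (TensorFlow/Keras backend assumed; the function names are just illustrative). Both losses are defined purely by their forward expressions, and I never write down dE/dy anywhere:

```python
import tensorflow as tf

# Two custom losses, defined only by their forward expressions --
# no derivative is supplied anywhere.
def my_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

def my_crossentropy(y_true, y_pred):
    # Clip predictions to avoid log(0).
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)
    return -tf.reduce_sum(y_true * tf.math.log(y_pred), axis=-1)
```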

Thanks

asked Jan 12 '18 by user2696794


People also ask

How do I pass a custom loss function in Keras?

Creating custom loss functions in Keras: a custom loss function can be created by defining a function that takes the true values and the predicted values as required parameters. The function should return an array of losses (one per sample). The function can then be passed at the compile stage.
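A minimal sketch of that pattern (the model architecture and names here are made up for illustration):

```python
import tensorflow as tf
from tensorflow import keras

# Hypothetical custom loss: mean squared error written by hand.
def custom_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

# A small illustrative model; the architecture is arbitrary.
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])

# The loss function object itself is passed at the compile stage.
model.compile(optimizer="adam", loss=custom_mse)
```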

What is Categorical_crossentropy in Keras?

categorical_crossentropy: used as a loss function for multi-class classification models where there are two or more output labels. The output labels are one-hot encoded as vectors of 0s and 1s. If the labels are provided in integer form, they can be converted to this categorical encoding with keras.utils.to_categorical.
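For example (hypothetical model and data, just to show the encoding step):

```python
import numpy as np
from tensorflow import keras

# Integer class labels for a 3-class problem (illustrative data).
y_int = np.array([0, 2, 1, 2])

# Convert integer labels into one-hot vectors of 0s and 1s.
y_onehot = keras.utils.to_categorical(y_int, num_classes=3)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])

# categorical_crossentropy expects one-hot targets;
# sparse_categorical_crossentropy would accept the integer labels directly.
model.compile(optimizer="adam", loss="categorical_crossentropy")
```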

What is the loss in Keras?

Loss: a scalar value that we attempt to minimize during training of the model. The lower the loss, the closer our predictions are to the true labels. This is usually Mean Squared Error (MSE), as David Maust said above, or often in Keras, Categorical Cross-Entropy.


1 Answer

To my understanding, as long as each operator you use in your error function already has a predefined gradient, the underlying framework will be able to calculate the gradient of your loss function for you: it applies the chain rule over the graph of those operations (automatic differentiation), so you only have to define the forward computation of E.
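A minimal sketch of how this looks with a TensorFlow backend (the loss and tensors here are made up for illustration): `tf.GradientTape` records the forward ops and then chains their predefined gradients, so no hand-written derivative is needed.

```python
import tensorflow as tf

# Illustrative custom loss built only from ops that TensorFlow
# already knows how to differentiate (subtract, square, reduce_mean).
def custom_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

y_true = tf.constant([1.0, 0.0, 1.0])
y_pred = tf.Variable([0.8, 0.2, 0.6])

with tf.GradientTape() as tape:
    loss = custom_mse(y_true, y_pred)

# dE/dy_pred is obtained by chaining the per-op gradients (chain rule).
grad = tape.gradient(loss, y_pred)
print(grad)  # approximately 2 * (y_pred - y_true) / 3
```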

answered Oct 22 '22 by Zyx