I am trying to define my own loss function in Keras, the Root Mean Squared Percentage Error. RMSPE is defined as:

RMSPE = 100 * sqrt( mean( ((y_true - y_pred) / y_true)^2 ) )
I have defined my loss function as:
from keras import backend as K

def rmspe(y_true, y_pred):
    # clip the denominator away from zero before dividing
    pct_err = (y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None)
    return 100. * K.sqrt(K.mean(K.square(pct_err), axis=-1))
But after a few iterations it gives me a loss value of nan. Can someone point out what I am doing wrong? Thanks.
Creating custom loss functions in Keras

A custom loss function can be created by defining a function that takes the true values and predicted values as required parameters. The function should return an array of losses. The function can then be passed at the compile stage.
The purpose of loss functions is to compute the quantity that a model should seek to minimize during training.
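For example, here is a minimal sketch of passing the rmspe function defined above at compile time (the Sequential model and its layer sizes are placeholders, not part of the original question):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(32, activation='relu', input_shape=(10,)),
                    Dense(1)])
# pass the function object itself, not a string name
model.compile(optimizer='adam', loss=rmspe)

Keras then evaluates rmspe on each batch of y_true/y_pred during training, just like a built-in loss.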
Try:

import numpy as np

try:
    rmspe = np.sqrt(np.mean(np.square((y_true - y_pred) / y_true))) * 100
except ZeroDivisionError:
    print("Oh, no!")

Note that if y_true and y_pred are NumPy arrays, division by zero produces inf/nan values and a runtime warning rather than raising ZeroDivisionError, so this guard only helps for plain Python scalars.
We use a loss function to measure how far the predicted values deviate from the actual values in the training data. Training adjusts the model weights to minimize that loss.
It's good that you're clipping the denominator, but epsilon in the TensorFlow backend is 1e-7 when I check on my machine, so you can still blow up your gradient by a factor of ten million when you divide by the clipped value. What you want to do is clip your gradient, which you can do with either the clipvalue or clipnorm argument to your optimizer:
from keras.optimizers import SGD

optimizer = SGD(clipvalue=10.0)

or

optimizer = SGD(clipnorm=2.0)
You have to play with the value a bit depending on how many output variables you have and how much noise is in your data.
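Putting the two pieces together, a minimal sketch might look like this (the clipnorm value and the model architecture are arbitrary placeholders to tune for your own data):

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

def rmspe(y_true, y_pred):  # same loss as in the question
    pct_err = (y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None)
    return 100. * K.sqrt(K.mean(K.square(pct_err), axis=-1))

model = Sequential([Dense(64, activation='relu', input_shape=(20,)),
                    Dense(1)])

# clipnorm rescales each gradient so its L2 norm never exceeds 2.0;
# clipvalue=10.0 would instead cap every gradient element at +/- 10.0
model.compile(optimizer=SGD(clipnorm=2.0), loss=rmspe)

With the loss itself unchanged, the clipping keeps a single tiny y_true from producing an update large enough to push the weights into nan territory.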