I want to implement a custom loss function in Python, and it should work like this pseudocode:
aux = abs(Real - Prediction) / Prediction
errors = []
if aux <= 0.1:
    errors.append(0)
elif 0.1 < aux <= 0.15:
    errors.append(5/3)
elif 0.15 < aux <= 0.2:
    errors.append(5)
else:
    errors.append(2000)
return sum(errors)
I started to define the metric like this:
def custom_metric(y_true, y_pred):
    # y_true:
    res = K.abs((y_true - y_pred) / y_pred, axis=1)
    ....
But I do not know how to get the value of res for the if and else. Also, I want to know what the function has to return.
Thanks
Also, I want to know what the function has to return.
Custom metrics can be passed at the compilation step. The function needs to take (y_true, y_pred) as arguments and return a single tensor value.
But I do not know how to get the value of res for the if and else.
You can return the result directly from the custom_metric function:
from keras import backend as K

def custom_metric(y_true, y_pred):
    # element-wise relative error |real - prediction| / prediction
    result = K.abs((y_true - y_pred) / y_pred)
    return result
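For completeness, here is how that function could then be plugged in at the compile stage; the model object (just called model here) is a placeholder, not part of the original answer:
# Hypothetical usage: pass the function at compile time,
# either as the loss itself or as an extra metric to monitor.
model.compile(optimizer='adam',
              loss=custom_metric,
              metrics=[custom_metric])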
The second step is to use a Keras callback in order to find the sum of the errors. The callback can be defined and passed to the fit method.
history = CustomLossHistory()
model.fit(callbacks=[history])
The last step is to create the CustomLossHistory class in order to collect the sum of the errors you expect. CustomLossHistory inherits the default methods from keras.callbacks.Callback; you can read more in the Keras documentation, but for this example we only need the on_train_begin and on_batch_end methods.
Implementation
import keras

class CustomLossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.errors = []

    def on_batch_end(self, batch, logs={}):
        # Map the batch loss to the discrete penalty and store it
        loss = logs.get('loss')
        self.errors.append(self.loss_mapper(loss))

    def loss_mapper(self, loss):
        if loss <= 0.1:
            return 0
        elif 0.1 < loss <= 0.15:
            return 5 / 3
        elif 0.15 < loss <= 0.2:
            return 5
        else:
            return 2000
After your model is trained you can access the errors with the following statement.
errors = history.errors
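Since the pseudocode in your question sums the individual penalties, the value you are after would then simply be the sum of that list:
# Total penalty accumulated over all batches
total_error = sum(history.errors)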
I'll take a leap here and say this won't work because it is not differentiable. The loss needs to be continuously differentiable so you can propagate a gradient through it.
If you want to make this work, you need to find a way to do it without discontinuities. For example, you could try a weighted average over your 4 discrete values where the weights strongly prefer the closest value.
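To sketch that idea (this is only an illustration; the band centres and the sharpness factor below are assumptions, not values from the question), you could compute a softmax-weighted average of the four penalty values, with weights that peak at the band closest to the relative error:
from keras import backend as K

# Assumed band centres for the intervals <=0.1, 0.1-0.15, 0.15-0.2, >0.2
CENTRES = K.constant([0.05, 0.125, 0.175, 0.25])
PENALTIES = K.constant([0.0, 5.0 / 3.0, 5.0, 2000.0])
SHARPNESS = 100.0  # larger values approximate the hard thresholds more closely

def smooth_custom_loss(y_true, y_pred):
    # Relative absolute error, one value per element
    aux = K.abs((y_true - y_pred) / y_pred)
    aux = K.expand_dims(aux, axis=-1)
    # Softmax weights that strongly prefer the closest band centre
    weights = K.softmax(-SHARPNESS * K.abs(aux - CENTRES), axis=-1)
    # Smooth, differentiable stand-in for the piecewise penalty
    return K.mean(K.sum(weights * PENALTIES, axis=-1))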