I am working on a medical dataset where I am trying to have as few false negatives as possible. A prediction of "disease when actually no disease" is okay for me, but a prediction of "no disease when actually a disease" is not. That is, I am okay with FP but not FN.
After doing some research, I found ways like keeping a higher learning rate for one class, using class weights, ensemble learning with specificity/sensitivity, etc.
I achieved a near-desired result using class weights like class_weight = {0: 0.3, 1: 0.7} and then calling model.fit(class_weight=class_weight) (note that the fit argument is class_weight, singular). This gave me very low FN but a pretty high FP. I am trying to reduce FP as much as possible while keeping FN very low.
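For reference, the class-weight setup described above looks roughly like this (a minimal sketch; model, X_train, and y_train are placeholders for your own model and data):

# Mistakes on class 1 (disease) are weighted more heavily than on class 0
class_weight = {0: 0.3, 1: 0.7}

# class_weight scales each sample's contribution to the loss by its class's weight
model.fit(X_train, y_train, epochs=10, class_weight=class_weight)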
I am struggling to write a custom loss function in Keras that will help me penalize the false negatives. Thanks for the help.
Creating custom loss functions in Keras: a custom loss function can be created by defining a function that takes the true values and predicted values as required parameters. The function should return an array of losses (one per sample). The function can then be passed to the model at the compile stage.
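As a minimal illustration of that signature, here is a hand-written mean squared error (my_mse is a made-up name, and model is assumed to already be built):

import keras.backend as K

# Any custom loss must accept (y_true, y_pred) and return per-sample losses
def my_mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

model.compile(optimizer='adam', loss=my_mse)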
A false positive is an outcome where the model incorrectly predicts the positive class, and a false negative is an outcome where the model incorrectly predicts the negative class. Together with true positives and true negatives, these four outcomes form the confusion matrix, and the metrics below are derived from them.
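If you want to count these outcomes directly, a quick sketch using scikit-learn (y_true and y_pred_labels are placeholder names for your labels and hard 0/1 predictions):

from sklearn.metrics import confusion_matrix

# For binary labels, sklearn orders the flattened matrix as TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred_labels).ravel()
print(f"FN={fn}, FP={fp}")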
Loss: a scalar value that we attempt to minimize during training of the model. The lower the loss, the closer our predictions are to the true labels. This is often Mean Squared Error (MSE), as David Maust said above, or, frequently in Keras, Categorical Cross Entropy.
I'll briefly introduce the concepts we're trying to tackle.
Recall (also called sensitivity): from all that were actually positive, how many did our model predict as positive?

All that were positive = TP + FN
What our model said were positive = TP

Recall = TP / (TP + FN)

Since recall moves inversely with FN, improving it decreases FN.
Specificity: from all that were actually negative, how many did our model predict as negative?

All that were negative = TN + FP
What our model said were negative = TN

Specificity = TN / (TN + FP)

Since specificity moves inversely with FP, improving it decreases FP.
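A quick worked example with made-up counts: with TP = 90, FN = 10, TN = 60, FP = 40, we get recall = 90 / (90 + 10) = 0.9 (few missed diseases), but specificity = 60 / (60 + 40) = 0.6 (many false alarms), which is exactly the regime the question describes.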
In your next searches, or whatever classification-related activity you perform, knowing these is going to give you an extra edge in communication and understanding.
So. These two concepts, as you may have figured out already, are opposites. This means that increasing one is likely to decrease the other.
Since you want priority on recall, but don't want to lose too much in specificity, you can combine both of them and attribute weights. Following what's clearly explained in this answer:
import keras.backend as K

def binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight):
    # Soft confusion-matrix counts: y_pred is kept as a probability
    # instead of being rounded, so the loss stays differentiable
    TN = K.sum((1 - y_true) * (1 - y_pred))
    TP = K.sum(y_true * y_pred)
    FP = K.sum((1 - y_true) * y_pred)
    FN = K.sum(y_true * (1 - y_pred))

    # K.epsilon() guards against division by zero
    specificity = TN / (TN + FP + K.epsilon())
    recall = TP / (TP + FN + K.epsilon())

    # Perfect recall and specificity give a loss of 0
    return 1.0 - (recall_weight * recall + spec_weight * specificity)
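As a quick sanity check on concrete values (toy tensors I made up, with K imported as above):

y_true = K.constant([1., 1., 0., 0.])
y_pred = K.constant([0.9, 0.2, 0.1, 0.8])
# Soft recall and specificity both come out to 0.55 here,
# so the weighted loss is 1.0 - 0.55 = 0.45
print(K.eval(binary_recall_specificity(y_true, y_pred, 0.9, 0.1)))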
Notice recall_weight and spec_weight? They're the weights we're attributing to each of the metrics. By convention, they should always add up to 1.0¹, e.g. recall_weight=0.9, spec_weight=0.1. The intention here is for you to experiment and see what proportion best suits your needs (there is a small sweep sketch after the compile example below).
But Keras' loss functions must only receive (y_true, y_pred) as arguments, so let's define a wrapper:
# Our custom loss' wrapper
def custom_loss(recall_weight, spec_weight):
    def recall_spec_loss(y_true, y_pred):
        return binary_recall_specificity(y_true, y_pred, recall_weight, spec_weight)

    # Returns the (y_true, y_pred) loss function
    return recall_spec_loss
And onto using it, we'd have
# Build model, add layers, etc.
model = my_model

# Getting our loss function for specific weights
loss = custom_loss(recall_weight=0.9, spec_weight=0.1)

# Compiling the model with such loss (compile also requires an optimizer)
model.compile(optimizer='adam', loss=loss)
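To find the proportion that best suits your FN/FP trade-off, you could sweep a few weight pairs; a rough sketch (build_model, X_train, and y_train are hypothetical placeholders):

for w in (0.95, 0.9, 0.8, 0.7):
    model = build_model()  # hypothetical helper returning a fresh, uncompiled model
    model.compile(optimizer='adam',
                  loss=custom_loss(recall_weight=w, spec_weight=1.0 - w))
    model.fit(X_train, y_train, epochs=10, verbose=0)
    # Evaluate FN/FP on held-out data here and keep the weights you like best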
¹ The weights, added, must total 1.0, because in case both recall = 1.0 and specificity = 1.0 (the perfect score), the formula

loss = 1.0 - (recall_weight * recall + spec_weight * specificity)

shall give us, for example,

loss = 1.0 - (0.9 * 1.0 + 0.1 * 1.0) = 1.0 - 1.0 = 0.0

Clearly, if we got the perfect score, we'd want our loss to equal 0.