I am training a ConvNet with two outputs. My training samples look like this:
[0, value_a1], [0, value_a2], ...
and
[value_b1, 0], [value_b2, 0], ....
I want to write my own loss function and mask the entries that contain the mask_value = 0. I have this function, though I am not sure whether it really does what I want, so I want to write some tests.
from tensorflow.python.keras import backend as K
from tensorflow.python.keras import losses


def masked_loss_function(y_true, y_pred, mask_value=0):
    '''
    This model has two target values which are independent of each other.
    We mask the output so that only the value that is used for training
    contributes to the loss.
    mask_value : the value that is not used for training
    '''
    mask = K.cast(K.not_equal(y_true, mask_value), K.floatx())
    return losses.mean_squared_error(y_true * mask, y_pred * mask)
However, I don't know how I can test this function with Keras. Usually it would be passed to model.compile(). I would like to test it along these lines:
x = [1, 0]
y = [1, 1]
assert masked_loss_function(x, y, 0) == 0
A custom loss function in Keras can be created by defining a function that takes the true values and the predicted values as its required parameters. The function should return an array of per-sample losses and can then be passed at the compile stage.
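For completeness, here is a minimal sketch of how such a loss is plugged into compile(). The two-output model, its layer sizes, and the dummy data below are assumptions purely for illustration:

import numpy as np
from tensorflow.python.keras import models, layers

# Hypothetical two-output regression model; it only shows where the
# custom loss is plugged in, not a tuned architecture.
model = models.Sequential([
    layers.Dense(16, activation='relu', input_shape=(8,)),
    layers.Dense(2),  # two independent target values
])

# Keras calls the loss as loss(y_true, y_pred), so mask_value keeps its default of 0.
model.compile(optimizer='adam', loss=masked_loss_function)

# Dummy targets in the question's format: one of the two entries is 0 (masked).
x_train = np.random.rand(32, 8)
y_train = np.zeros((32, 2))
y_train[:16, 1] = np.random.rand(16)   # [0, value_a]
y_train[16:, 0] = np.random.rand(16)   # [value_b, 0]
model.fit(x_train, y_train, epochs=1, verbose=0)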
I think one way of testing this is to use a Keras backend function. Here we define a function that takes two tensors as input and returns a tensor as output:
from tensorflow.python.keras import backend as K
from tensorflow.python.keras import layers

x = layers.Input(shape=(None,))
y = layers.Input(shape=(None,))
loss_func = K.function([x, y], [masked_loss_function(x, y, 0)])
And now we can use loss_func to run the computation graph we have defined:
assert loss_func([[[1,0]], [[1,1]]]) == [[0]]
Note that the Keras backend function, i.e. K.function, expects its input and output arguments to be lists of tensors. Additionally, x and y take a batch of samples, i.e. an array of vectors, with an undefined shape.
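To check that the masking itself behaves as intended, and not just the trivial all-equal case, one might also try inputs where only the masked position disagrees; the expected outcomes below follow directly from the function's definition:

import numpy as np

# The second target in y_true is 0, so it is masked: a wrong prediction
# there should not contribute to the loss.
masked = loss_func([[[1, 0]], [[1, 5]]])[0]
assert np.allclose(masked, 0)

# The first target is not masked, so an error there must show up in the loss.
unmasked = loss_func([[[1, 0]], [[3, 0]]])[0]
assert unmasked > 0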
Another workaround is the following:
x = [1, 0]
y = [1, 1]
F = masked_loss_function(K.variable(x), K.variable(y), K.variable(0))
assert K.eval(F) == 0
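If you are on TensorFlow 2.x with eager execution enabled (an assumption; the snippets above are written with graph mode in mind), you can also evaluate the loss directly on constant tensors and inspect the result:

import tensorflow as tf

# Assumes TF 2.x eager execution, where the loss can be evaluated on constants directly.
y_true = tf.constant([[1.0, 0.0]])
y_pred = tf.constant([[1.0, 1.0]])
loss = masked_loss_function(y_true, y_pred)
assert float(loss.numpy()) == 0.0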