
How to test a custom loss function in keras?

I am training a ConvNet with two outputs. My training samples look like this:

[0, value_a1], [0, value_a2], ...

and

[value_b1, 0], [value_b2, 0], ....

I want to write my own loss function that masks the pairs containing the mask_value = 0. I have this function, though I am not sure whether it really does what I want, so I want to write some tests.

from tensorflow.python.keras import backend as K
from tensorflow.python.keras import losses

def masked_loss_function(y_true, y_pred, mask_value=0):
    '''
    This model has two target values which are independent of each other.
    We mask the output so that only the value that is used for training 
    contributes to the loss.
        mask_value : is the value that is not used for training
    '''
    mask = K.cast(K.not_equal(y_true, mask_value), K.floatx())
    return losses.mean_squared_error(y_true * mask, y_pred * mask)

However, I don't know how to test this function with Keras. Usually it would just be passed to model.compile(). I'd like something along these lines:

x = [1, 0]
y = [1, 1]
assert masked_loss_function(x, y, 0) == 0
Soren asked Jun 14 '18

People also ask

How do I use custom loss function in Keras?

A custom loss function can be created by defining a function that takes the true values and predicted values as required parameters. The function should return an array of losses and can then be passed at the compile stage.
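As a minimal sketch of what that looks like (assuming TensorFlow 2.x; the function name my_mse is just an illustrative choice, it reimplements mean squared error by hand):

```python
import tensorflow as tf

# A hand-written mean-squared-error loss. Keras calls it with
# (y_true, y_pred) and expects one loss value per sample.
def my_mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

# The custom loss is passed at the compile stage:
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss=my_mse)
```

Because the loss is a plain function, it can also be called directly on tensors to sanity-check it before training.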

What is Categorical_crossentropy in Keras?

categorical_crossentropy: used as a loss function for multi-class classification models where there are two or more output labels. The output label is assigned a one-hot category encoding in the form of 0s and 1s. If the output label is in integer form, it is converted into categorical encoding using Keras.
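A small sketch of that integer-to-one-hot conversion, using keras.utils.to_categorical (assuming TensorFlow 2.x and a 3-class problem):

```python
import numpy as np
import tensorflow as tf

# Integer class labels for a 3-class problem:
labels = np.array([0, 2, 1])

# categorical_crossentropy expects one-hot targets, so convert first:
one_hot = tf.keras.utils.to_categorical(labels, num_classes=3)
# one_hot -> [[1. 0. 0.]
#             [0. 0. 1.]
#             [0. 1. 0.]]
```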

What is loss value in Keras?

Loss: a scalar value that we attempt to minimize during training of the model. The lower the loss, the closer our predictions are to the true labels. This is usually mean squared error (MSE), as David Maust said above, or often, in Keras, categorical cross-entropy.


2 Answers

I think one way of achieving that is to use a Keras backend function. Here we define a function that takes two tensors as input and returns a tensor as output:

from keras import backend as K
from keras import layers

x = layers.Input(shape=(None,))
y = layers.Input(shape=(None,))
loss_func = K.function([x, y], [masked_loss_function(x, y, 0)])

And now we can use loss_func to run the computation graph we have defined:

assert loss_func([[[1,0]], [[1,1]]]) == [[0]]

Note that the Keras backend function, i.e. K.function, expects its input and output arguments to be lists of tensors. Additionally, x and y take a batch of samples, i.e. an array of tensors, with undefined shape.
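If you are on TensorFlow 2.x, eager execution offers a simpler alternative: you can call the loss directly on constant tensors, no K.function needed. A sketch, re-declaring the question's masked_loss_function with tf.keras imports:

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras import losses

def masked_loss_function(y_true, y_pred, mask_value=0):
    mask = K.cast(K.not_equal(y_true, mask_value), K.floatx())
    return losses.mean_squared_error(y_true * mask, y_pred * mask)

# With eager execution the loss behaves like a plain function:
y_true = tf.constant([[1.0, 0.0]])   # second entry carries the mask_value
y_pred = tf.constant([[1.0, 1.0]])
loss = masked_loss_function(y_true, y_pred, 0)
assert loss.numpy()[0] == 0.0        # the masked entry does not contribute
```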

today answered Sep 25 '22


Here is another workaround, evaluating the loss on Keras backend variables:

x = [1, 0]
y = [1, 1]
F = masked_loss_function(K.variable(x), K.variable(y), K.variable(0))
assert K.eval(F) == 0 
Mehdi answered Sep 21 '22