Custom loss function in Keras based on the input data

Tags:

keras

I am trying to create a custom loss function in Keras. I want to compute the loss based on both the input and the predicted output of the neural network.

I tried using a custom loss function in Keras. As I understand it, y_true is the target output that we give for training and y_pred is the predicted output of the neural network. The loss function below is the same as the "mean_squared_error" loss in Keras.

import keras.backend as K

def customloss(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

In addition to the mean_squared_error term, I would also like to use the input to the neural network when computing the custom loss. Is there a way to pass the network's input as an argument to the customloss function?

Thank you.

asked Mar 31 '19 by user3443033

People also ask

How do I create a custom loss function in keras?

Creating custom loss functions in Keras A custom loss function can be created by defining a function that takes the true values and predicted values as required parameters. The function should return an array of losses. The function can then be passed at the compile stage.
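As a minimal sketch of that pattern (written here in NumPy to illustrate the math; in Keras you would use backend ops on tensors instead), a custom loss is just a function of y_true and y_pred that returns per-sample losses:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # average of squared differences over the last axis -> one loss per sample
    return np.mean(np.square(y_pred - y_true), axis=-1)

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[1.5, 2.0], [2.0, 4.0]])
print(mean_squared_error(y_true, y_pred))  # per-sample losses: [0.125, 0.5]
```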

What is loss function keras?

The purpose of loss functions is to compute the quantity that a model should seek to minimize during training.

What is Sparse_categorical_crossentropy loss function?

sparse_categorical_crossentropy: Used as a loss function for multi-class classification models where the output label is assigned an integer value (0, 1, 2, 3…). This loss function is mathematically the same as categorical_crossentropy; it just has a different interface.
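The "different interface, same math" point can be checked directly. A NumPy sketch comparing integer labels against their one-hot equivalents on some made-up predicted probabilities:

```python
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])

# integer labels, sparse_categorical_crossentropy style
sparse_labels = np.array([0, 1])
sparse_loss = -np.log(probs[np.arange(len(sparse_labels)), sparse_labels])

# the same labels one-hot encoded, categorical_crossentropy style
one_hot = np.eye(3)[sparse_labels]
cat_loss = -np.sum(one_hot * np.log(probs), axis=-1)

print(np.allclose(sparse_loss, cat_loss))  # True
```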

What is binary cross entropy loss in keras?

The BinaryCrossentropy class computes the cross-entropy loss between true labels and predicted labels. Use this cross-entropy loss for binary (0 or 1) classification applications. The loss function requires the following inputs: y_true (true label): This is either 0 or 1.
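The underlying formula is -(y·log(p) + (1-y)·log(1-p)). A NumPy sketch (the clipping epsilon is an assumption for numerical safety, mirroring what the Keras implementation does internally):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(binary_crossentropy(1.0, 0.9))  # small loss for a confident correct prediction
print(binary_crossentropy(1.0, 0.1))  # large loss for a confident wrong prediction
```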


2 Answers

I have come across 2 solutions to the question you asked.

  1. You can pass your input tensor as an argument to a custom loss wrapper function:

    import keras.backend as K
    from keras.layers import Input, Dense
    from keras.models import Model
    from keras.optimizers import Adam

    def custom_loss(i):
        def loss(y_true, y_pred):
            # add any term computed from the input tensor i to the MSE
            return K.mean(K.square(y_pred - y_true), axis=-1)  # + something with i...
        return loss

    def baseline_model():
        # create model
        i = Input(shape=(5,))
        x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)
        o = Dense(1, kernel_initializer='normal', activation='linear')(x)
        model = Model(i, o)
        model.compile(loss=custom_loss(i), optimizer=Adam(lr=0.0005))
        return model

This solution is also mentioned in the accepted answer here
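The wrapper is just a Python closure that captures the input, so the mechanism can be sketched without Keras at all. In this NumPy sketch the input-dependent penalty (an L2 term on the captured input, with a made-up weight alpha) is purely illustrative, standing in for the "something with i" above:

```python
import numpy as np

def custom_loss(i, alpha=0.01):
    # i is captured by the closure, just as the Keras input tensor is above
    def loss(y_true, y_pred):
        mse = np.mean(np.square(y_pred - y_true), axis=-1)
        penalty = alpha * np.mean(np.square(i), axis=-1)  # hypothetical input-based term
        return mse + penalty
    return loss

x = np.array([[1.0, 2.0]])
loss_fn = custom_loss(x)
print(loss_fn(np.array([[1.0]]), np.array([[2.0]])))  # mse 1.0 + penalty 0.025 -> [1.025]
```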

  2. You can pad your label with extra data columns from the input and write a custom loss. This is helpful if you only need one or a few feature columns from your input:

    def custom_loss(data, y_pred):
        y_true = data[:, 0:1]  # slice as 2-D so the shape matches y_pred
        i = data[:, 1:2]
        # add any term computed from the feature column i to the MSE
        return K.mean(K.square(y_pred - y_true), axis=-1)  # + something with i...

    def baseline_model():
        # create model
        i = Input(shape=(5,))
        x = Dense(5, kernel_initializer='glorot_uniform', activation='linear')(i)
        o = Dense(1, kernel_initializer='normal', activation='linear')(x)
        model = Model(i, o)
        model.compile(loss=custom_loss, optimizer=Adam(lr=0.0005))
        return model

    # pad the labels with the input column before fitting
    model.fit(X, np.append(Y_true, X[:, 0:1], axis=1), batch_size=batch_size, epochs=90, shuffle=True, verbose=1)

This solution can be found also here in this thread.
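A NumPy sketch of this label-padding trick, on toy data, with a made-up 0.1-weighted penalty on the smuggled input column standing in for the "something with i":

```python
import numpy as np

# toy data: 4 samples, 3 features; the first feature also feeds the loss
X = np.arange(12.0).reshape(4, 3)
Y_true = np.array([[0.0], [1.0], [2.0], [3.0]])

# pad the labels with the first input column, as in model.fit above
padded = np.append(Y_true, X[:, 0:1], axis=1)

def custom_loss(data, y_pred):
    y_true = data[:, 0:1]  # real labels
    i = data[:, 1:2]       # smuggled input feature
    return np.mean(np.square(y_pred - y_true), axis=-1) + 0.1 * np.mean(i, axis=-1)

y_pred = np.array([[0.5], [1.5], [2.5], [3.5]])
print(custom_loss(padded, y_pred))  # per-sample losses: [0.25, 0.55, 0.85, 1.15]
```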

I have only used the 2nd method when I had to use input feature columns in the loss. I have used the first method with scalar arguments, but I believe a tensor input works as well.

answered Oct 04 '22 by Anakin


You could wrap your custom loss with another function that takes the input tensor as an argument:

import keras.backend as K

def customloss(x):
    def loss(y_true, y_pred):
        # Use x here as you wish
        err = K.mean(K.square(y_pred - y_true), axis=-1)
        return err

    return loss

And then compile your model as follows:

model.compile('sgd', customloss(x))

where x is your input tensor.

NOTE: Not tested.

answered Oct 04 '22 by rvinas