
Tensorflow 2.0 Custom loss function with multiple inputs

I am trying to optimize a model with the following two loss functions:

from tensorflow import keras
kls = keras.losses  # shorthand used throughout

def loss_1(pred, weights, logits):
    # weighted sparse categorical cross-entropy over raw logits
    weighted_sparse_ce = kls.SparseCategoricalCrossentropy(from_logits=True)
    return weighted_sparse_ce(pred, logits, sample_weight=weights)

and

def loss_2(y_pred, y):
    return kls.mean_squared_error(y_pred, y)

however, because TensorFlow 2 expects a Keras loss function to have the signature

def fn(y_true, y_pred):
    ...

I am using a work-around for loss_1: I pack pred and weights into a single tensor before passing them to loss_1 in the call to model.fit, then unpack them inside loss_1. This is inelegant and nasty because pred and weights have different data types, so it requires an additional cast, pack, un-pack, and un-cast every time I call model.fit.
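For concreteness, the workaround looks something like this (a sketch assuming pred holds integer class labels and weights are floats; loss_1_packed is the packed variant of loss_1 above):

import tensorflow as tf

def pack(pred, weights):
    # cast the integer labels to float so both tensors share a dtype, then stack
    return tf.stack([tf.cast(pred, tf.float32), weights], axis=-1)

def loss_1_packed(y_true_packed, logits):
    # un-pack and un-cast inside the loss, since Keras only passes (y_true, y_pred)
    pred = tf.cast(y_true_packed[..., 0], tf.int32)
    weights = y_true_packed[..., 1]
    return kls.SparseCategoricalCrossentropy(from_logits=True)(
        pred, logits, sample_weight=weights)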

Furthermore, I am aware of the sample_weight argument to fit, which is almost a solution to this question. It might be workable were it not for the fact that I am using two loss functions and only want the sample_weight applied to one of them. Also, even if this were a solution, it would not generalize to other kinds of custom loss function.
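For reference, that route looks something like this on a single-loss model (a minimal sketch with made-up data):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4)])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
x = np.random.rand(8, 3).astype('float32')
y = np.random.randint(0, 4, size=(8,))
w = np.random.rand(8).astype('float32')   # per-sample weights
model.fit(x, y, sample_weight=w, epochs=1, verbose=0)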


All that being said, my question, stated concisely, is:

What is the best way to create a loss function with an arbitrary number of arguments in TensorFlow 2?

Another thing I have tried is passing a tf.tuple, but that also seems to violate TensorFlow's expectations for a loss function's input.

asked Sep 20 '19 by Jon Deaton

People also ask

How do you customize a loss function?

A custom loss function can be created by defining a function that takes the true values and predicted values as required parameters. The function should return an array of losses. The function can then be passed at the compile stage.
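For example, a minimal custom loss of that shape, assuming model is an already-built Keras model (names here are illustrative):

import tensorflow as tf

def my_mse(y_true, y_pred):
    # returns one loss value per sample, averaged over the last axis
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

model.compile(optimizer='adam', loss=my_mse)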

What is loss='sparse_categorical_crossentropy'?

sparse_categorical_crossentropy: used as a loss function for multi-class classification models where the output label is assigned an integer value (0, 1, 2, 3, …). This loss function is mathematically the same as categorical_crossentropy; it just has a different interface.
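Concretely, the two losses give the same value for the same underlying label; only the encoding differs:

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
int_label = tf.constant([0])                    # integer class index
one_hot_label = tf.constant([[1.0, 0.0, 0.0]])  # the same class, one-hot

sparse = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
dense = tf.keras.losses.CategoricalCrossentropy(from_logits=True)

print(sparse(int_label, logits).numpy())   # identical values
print(dense(one_hot_label, logits).numpy())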

What is loss in Tensorflow training?

We use a loss function to determine how far the predicted values deviate from the actual values in the training data. We then change the model weights to minimize the loss; that is what training is all about.
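In miniature, one training step is just that: measure the deviation, then nudge the weights to reduce it:

import tensorflow as tf

w = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
with tf.GradientTape() as tape:
    loss = tf.square(3.0 - w)   # how far the "prediction" w is from the target 3.0
opt.apply_gradients([(tape.gradient(loss, w), w)])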


2 Answers

To expand on Jon's answer: if you still want the benefits of a Keras Model, you can subclass keras.Model and write your own custom train_step:

import tensorflow as tf
from tensorflow import keras
from tensorflow.python.keras.engine import data_adapter

# custom loss function that takes two outputs of the model
# as input parameters which would otherwise not be possible
def custom_loss(gt, x, y):
    return tf.reduce_mean(x) + tf.reduce_mean(y)

class CustomModel(keras.Model):
    def compile(self, optimizer, my_loss):
        super().compile(optimizer)
        self.my_loss = my_loss

    def train_step(self, data):
        data = data_adapter.expand_1d(data)
        input_data, gt, sample_weight = data_adapter.unpack_x_y_sample_weight(data)

        with tf.GradientTape() as tape:
            y_pred = self(input_data, training=True)
            loss_value = self.my_loss(gt, y_pred[0], y_pred[1])

        grads = tape.gradient(loss_value, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))

        return {"loss_value": loss_value}

...

model = CustomModel(inputs=input_tensor0, outputs=[x, y])
model.compile(optimizer=tf.keras.optimizers.Adam(), my_loss=custom_loss)
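With that compile override in place, training presumably goes through the usual fit machinery; a hypothetical call, with placeholder arrays (train_x and train_gt are stand-in names, not from the answer):

# train_x: model inputs; train_gt: whatever `gt` your loss expects
model.fit(train_x, train_gt, epochs=10, batch_size=32)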
answered Oct 12 '22 by Jodo


This problem can be easily solved using custom training in TF2. You need only compute your two-component loss function within a GradientTape context and then call an optimizer with the produced gradients. For example, you could create a function custom_loss which computes both losses given the arguments to each:

import tensorflow as tf

def custom_loss(model, loss1_args, loss2_args):
  # model: tf.keras.Model
  # loss1_args: arguments to loss_1, as a tuple.
  # loss2_args: arguments to loss_2, as a tuple.
  with tf.GradientTape() as tape:
    l1_value = loss_1(*loss1_args)
    l2_value = loss_2(*loss2_args)
    loss_value = [l1_value, l2_value]
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

# In training loop:
loss_values, grads = custom_loss(model, loss1_args, loss2_args)
optimizer.apply_gradients(zip(grads, model.trainable_variables))

In this way, each loss function can take an arbitrary number of eager tensors, regardless of whether they are inputs to or outputs of the model, and the sets of arguments to the two loss functions need not be disjoint.
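One caveat worth making explicit: tape.gradient only sees operations recorded while the tape is active, so the model's forward pass must also run inside the GradientTape context for gradients to reach the model's weights. A fuller hypothetical training step, reusing loss_1 and loss_2 from the question (argument names here are made up), might look like:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()

def train_step(model, x, labels, weights, targets):
    # run the forward pass inside the tape so its ops are recorded
    with tf.GradientTape() as tape:
        logits, value_pred = model(x, training=True)
        l1 = loss_1(labels, weights, logits)              # weighted cross-entropy
        l2 = tf.reduce_mean(loss_2(value_pred, targets))  # mean squared error
        loss_value = l1 + l2
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value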

answered Oct 13 '22 by Jon Deaton