 

How to get loss for each sample within a batch in keras?

Tags:

keras

How can I get the loss for each sample within a batch? It seems that Keras does not provide any functions meeting the demand.

zhf061 asked Jul 18 '19 04:07

People also ask

What is loss='sparse_categorical_crossentropy'?

sparse_categorical_crossentropy: used as a loss function for multi-class classification models where the output label is an integer class index (0, 1, 2, 3, …). This loss function is mathematically the same as categorical_crossentropy; it just has a different interface, taking integer labels instead of one-hot vectors.
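That equivalence can be checked with a small NumPy sketch (the softmax probabilities below are made-up values for illustration):

```python
import numpy as np

# Toy softmax output for one sample over 3 classes (illustrative values)
probs = np.array([0.1, 0.7, 0.2])

# Sparse label: a single integer class index
label_int = 1
sparse_ce = -np.log(probs[label_int])

# Categorical label: the same class, one-hot encoded
label_onehot = np.array([0.0, 1.0, 0.0])
categorical_ce = -np.sum(label_onehot * np.log(probs))

# Both interfaces compute the same cross-entropy value
print(np.isclose(sparse_ce, categorical_ce))  # True
```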

How is keras loss calculated?

Loss calculation is based on the difference between predicted and actual values: if the predicted values are far from the actual values, the loss function produces a large number. Keras is a library for creating neural networks.

How do I create a custom loss function in keras?

A custom loss function can be created by defining a function that takes the true values and predicted values as parameters. The function should return an array of per-sample losses, and it can then be passed at the compile stage.
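A minimal sketch of that pattern, assuming TensorFlow 2.x (the loss name `per_sample_mae` and the model shape are made up for this example):

```python
import tensorflow as tf

def per_sample_mae(y_true, y_pred):
    # Return one loss value per sample; Keras averages these
    # into the scalar reported during training.
    return tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss=per_sample_mae)
```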

What is loss Lstm?

The loss is the quantity your LSTM model minimizes. Mean Squared Error (MSE) is the default loss for regression problems; it is calculated as the average of the squared differences between the predicted and actual values.
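As a quick numeric check of that definition (the values are chosen arbitrarily):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

# Squared differences: [0.25, 0.0, 1.0]; their mean is the MSE
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # 1.25 / 3 ≈ 0.4167
```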


2 Answers

Keras always computes losses per sample: it has to, in order to calculate the loss values that drive backpropagation. The values are not typically exposed to the user other than as their average per batch, but they are calculated per sample by the loss function and only averaged for display purposes.

A very simple example model:

import tensorflow as tf
import tensorflow.keras.backend as K
keras = tf.keras

model = keras.models.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(1)
])

def examine_loss(y_true, y_pred):
  result = keras.losses.mean_squared_error(y_true, y_pred)
  result = K.print_tensor(result, message='losses')
  return result

model.compile('adam', examine_loss)
model.summary()

If you execute the following test code:

import numpy as np
X = np.random.rand(100, 4)

def test_fn(x):
  return x[0] * 0.2 + x[1] * 5.0 + x[2] * 0.3 + x[3] + 0.6

y = np.apply_along_axis(test_fn, 1, X)

model.fit(X[0:4], y[0:4])

You should see something like the following:

losses [23.2873611 26.1659927 34.1300354 6.16115761]

(Numbers will differ since they depend on random initialisation).

This may interest you or not depending on what you want to do with the individual losses. That was not clear at all from the initial question.
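If you want the per-sample values as an array rather than printed during training, one option (not from the answer above, just a sketch) is to call the loss function directly on the model's predictions, since the functional losses in `tf.keras.losses` reduce only the last axis:

```python
import tensorflow as tf

# Toy targets and predictions, standing in for y and model.predict(X)
y_true = tf.constant([[1.0], [2.0]])
y_pred = tf.constant([[1.5], [4.0]])

# mean_squared_error reduces only the last axis,
# so it returns one loss per sample: 0.25 and 4.0 here
per_sample = tf.keras.losses.mean_squared_error(y_true, y_pred)
print(per_sample.numpy())
```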

Pedro Marques answered Sep 20 '22 09:09

I know this thread is a bit old, but I found a neat solution and maybe this helps someone:

import tensorflow as tf
from tensorflow.keras.losses import SparseCategoricalCrossentropy

# Suppose you have an already trained model
model = ...

# The loss function is just an example; the reduction is the important part
loss = SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
model.compile(optimizer=model.optimizer, loss=loss)  # keep your original optimizer

# And then you'll get each loss for each instance within a batch
model.evaluate(X, y, batch_size=128)
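The same reduction trick also works without recompiling a model: a loss object built with `Reduction.NONE` returns one value per sample when called directly (the labels and logits below are made up for illustration):

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE)

y_true = tf.constant([0, 1])                    # integer class labels
logits = tf.constant([[2.0, 0.0], [0.0, 3.0]])  # raw model outputs

# One cross-entropy value per sample, no averaging
per_sample = loss_fn(y_true, logits)
print(per_sample.numpy())
```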

anon767 answered Sep 20 '22 09:09