Given a set of training examples for a neural network, we want to weight individual examples more or less heavily during training. We apply a weight between 0.0 and 1.0 to each example based on some criterion for the "value" (e.g. validity or confidence) of the example. How can this be implemented in TensorFlow, in particular when using tf.nn.sparse_softmax_cross_entropy_with_logits()?
In the most common case, where you call tf.nn.sparse_softmax_cross_entropy_with_logits with logits of shape [batch_size, num_classes] and labels of shape [batch_size], the function returns a tensor of shape [batch_size] containing one loss value per example. You can multiply this tensor by a weight tensor before reducing it to a single loss value:
# Per-example weights, fed in alongside each batch.
weights = tf.placeholder(name="loss_weights", shape=[None], dtype=tf.float32)
# One cross-entropy loss value per example, shape [batch_size].
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
# Scale each example's loss by its weight, then average over the batch.
loss = tf.reduce_mean(weights * loss_per_example)
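For completeness, here is a minimal end-to-end sketch of how this graph might be driven, assuming TensorFlow 1.x; the logits/labels placeholders and the toy batch values are hypothetical, only the weighting scheme itself comes from the snippet above:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x style, matching the snippet above

num_classes = 3

# Hypothetical placeholders for a batch of logits and integer class labels.
logits = tf.placeholder(shape=[None, num_classes], dtype=tf.float32)
labels = tf.placeholder(shape=[None], dtype=tf.int32)
weights = tf.placeholder(name="loss_weights", shape=[None], dtype=tf.float32)

loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
loss = tf.reduce_mean(weights * loss_per_example)

with tf.Session() as sess:
    # A toy batch of two examples: the second is down-weighted to 0.2.
    value = sess.run(loss, feed_dict={
        logits: np.array([[2.0, 0.5, 0.1], [0.3, 1.5, 0.2]], dtype=np.float32),
        labels: np.array([0, 1], dtype=np.int32),
        weights: np.array([1.0, 0.2], dtype=np.float32),
    })
    print(value)

Note that tf.reduce_mean divides by the batch size, so this is a weighted sum of per-example losses scaled by 1/batch_size. If you instead want a true weighted average that is insensitive to the scale of the weights, divide tf.reduce_sum(weights * loss_per_example) by tf.reduce_sum(weights).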