Suppose I have logits like
[[4.3, -0.5, -2.7, 0, 0],
[0.5, 2.3, 0, 0, 0]]
where the last two entries in the first example and the last three in the second example are masked (that is, zero-padded) and should not affect the loss or gradient computations.
How do I compute the cross-entropy loss between these logits and the corresponding labels? For concreteness, the labels for this example could be
[[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0]]
(One issue: softmax followed by log would be applied to the masked zeros too, and TF's cross-entropy methods would then include those elements in the loss.)
(You can also think about the problem like this: my logits have varying lengths within a batch, i.e. lengths 3 and 2 for examples 1 and 2 respectively, and the labels are padded the same way.)
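To make the issue concrete, here is a minimal sketch (TF 1.x style assumed) showing that a plain softmax gives the padded zeros nonzero probability mass:

import tensorflow as tf

logits = tf.constant([[4.3, -0.5, -2.7, 0.0, 0.0],
                      [0.5,  2.3,  0.0, 0.0, 0.0]])
# Each padded zero still contributes exp(0) = 1 to the softmax
# denominator, so it receives nonzero probability mass.
probs = tf.nn.softmax(logits)
with tf.Session() as sess:
    print(sess.run(probs))  # nonzero probability at every padded position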
Masking the cross-entropy loss is a common operation, and the library covers it. tf.losses actually handles the more general concept of weights; provide binary (0/1) weights to get masking.
mask = tf.not_equal(logits, 0)  # valid positions are nonzero, per the OP's zero-padding
weights = tf.to_float(mask)  # convert to 0/1 weights
loss = tf.losses.softmax_cross_entropy(labels, logits, weights)
Don't compute softmax cross-entropy by computing the softmax of the output and then the cross-entropy separately; you lose the numerical precision and stability of the fused computation.
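One caveat with the snippet above: tf.losses.softmax_cross_entropy reduces the cross-entropy to a single value per example, so the weights are expected to broadcast against that per-example loss, and the padded zeros still contribute exp(0) = 1 to the softmax denominator even when their loss terms are weighted out. Here is a complementary sketch (TF 1.x assumed; -1e9 is an arbitrary "effectively minus infinity" choice) that pushes the padded logits to a large negative value before the fused op, so they receive ~0 probability and drop out of both the loss and the gradients:

import tensorflow as tf

logits = tf.constant([[4.3, -0.5, -2.7, 0.0, 0.0],
                      [0.5,  2.3,  0.0, 0.0, 0.0]])
labels = tf.constant([[1.0, 0.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0, 0.0]])

mask = tf.not_equal(logits, 0.0)  # valid where nonzero, per the OP's padding
# Replace padded logits with a large negative number so softmax
# assigns them ~0 probability; the fused op then ignores them.
neg_inf = tf.fill(tf.shape(logits), -1e9)
masked_logits = tf.where(mask, logits, neg_inf)
loss = tf.losses.softmax_cross_entropy(labels, masked_logits)

with tf.Session() as sess:
    print(sess.run(loss))

Note that this padding scheme cannot distinguish a genuine logit that happens to be exactly 0 from padding; if you know the sequence lengths, building the mask from them (e.g. with tf.sequence_mask) is more robust.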