I have tested softmax_cross_entropy_with_logits_v2 with some random inputs:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 5])
y = tf.placeholder(tf.float32, shape=[None, 5])
softmax = tf.nn.softmax_cross_entropy_with_logits_v2(logits=x, labels=y)

with tf.Session() as sess:
    feedx = [[0.1, 0.2, 0.3, 0.4, 0.5], [0., 0., 0., 0., 1.]]
    feedy = [[1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.]]
    softmax = sess.run(softmax, feed_dict={x: feedx, y: feedy})
    print("softmax", softmax)
Console output: softmax [1.8194163 0.9048325]
My understanding of this function was that it only returns a cost when the logits and the labels are different.
So why does it return 0.9048325 for the second sample, where the values are the same?
The way tf.nn.softmax_cross_entropy_with_logits_v2 works is that it first applies a softmax to your x array to turn it into probabilities:

p_i = exp(x_i) / sum_j exp(x_j)

where i is the index into the array. The output of tf.nn.softmax_cross_entropy_with_logits_v2 is then the dot product between -log(p) and the labels:

loss = -sum_i y_i * log(p_i)

Since the labels are either 0 or 1, only the term where the label equals one contributes. So in your first sample, the softmax probability of the first index is

p_0 = exp(0.1) / (exp(0.1) + exp(0.2) + exp(0.3) + exp(0.4) + exp(0.5)) ≈ 0.1621

and the output will be

-log(0.1621) ≈ 1.8194

Your second sample gives a different value because x[0] differs from x[1]: applying the same formula to the second row, the softmax of [0, 0, 0, 0, 1] puts only exp(1) / (4*exp(0) + exp(1)) ≈ 0.4046 of the probability mass on the last index, so the loss is -log(0.4046) ≈ 0.9048 rather than 0.
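For reference, here is a minimal NumPy sketch (just the same arithmetic, not TensorFlow's actual implementation) that reproduces both numbers from the question:

import numpy as np

x = np.array([[0.1, 0.2, 0.3, 0.4, 0.5],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
y = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])

# softmax along each row: p_i = exp(x_i) / sum_j exp(x_j)
p = np.exp(x) / np.exp(x).sum(axis=1, keepdims=True)

# cross entropy: dot product of -log(p) with the labels
loss = -(y * np.log(p)).sum(axis=1)
print(loss)  # approximately [1.8194163 0.9048325]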
tf.nn.softmax_cross_entropy_with_logits_v2, as per the documentation, expects unscaled inputs, because it performs a softmax operation on the logits internally. Your second input [0, 0, 0, 0, 1] is therefore softmaxed internally to roughly [0.15, 0.15, 0.15, 0.15, 0.4], and the cross entropy between that distribution and the true label [0, 0, 0, 0, 1] is the value you get.
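A quick way to see this internal softmax (a sketch in the same TF 1.x style as the question, not the library's actual implementation) is to compare the op against an explicit tf.nn.softmax followed by the cross-entropy formula:

import tensorflow as tf

logits = tf.constant([[0., 0., 0., 0., 1.]])
labels = tf.constant([[0., 0., 0., 0., 1.]])

# the op under discussion, which softmaxes the logits internally
auto = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels)

# the same thing done explicitly
probs = tf.nn.softmax(logits)  # roughly [0.149, 0.149, 0.149, 0.149, 0.405]
manual = -tf.reduce_sum(labels * tf.log(probs), axis=1)

with tf.Session() as sess:
    print(sess.run([auto, manual]))  # both approximately [0.9048]

If you want the loss for that sample to approach zero, the logits have to put almost all of the softmax mass on the last class, e.g. [0., 0., 0., 0., 100.] instead of [0., 0., 0., 0., 1.].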