The TensorFlow function tf.nn.weighted_cross_entropy_with_logits() takes the argument pos_weight. The documentation defines pos_weight as "A coefficient to use on the positive examples." I assume this means that increasing pos_weight increases the loss from false positives and decreases the loss from false negatives. Or do I have that backwards?
Actually, it's the other way around. Citing the documentation: "The argument pos_weight is used as a multiplier for the positive targets."
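Concretely, the per-element loss documented for this op is z * -log(sigmoid(x)) * pos_weight + (1 - z) * -log(1 - sigmoid(x)), where z is the label and x the logit, so pos_weight scales only the term that is active for positive labels. A minimal sketch (the logit values are made up) checking the built-in op against that formula:

    import tensorflow as tf

    logits = tf.constant([2.0, -1.5, 0.3])  # made-up example logits
    labels = tf.constant([1.0, 1.0, 0.0])
    pos_weight = 2.0

    # Built-in op
    loss_tf = tf.nn.weighted_cross_entropy_with_logits(
        labels=labels, logits=logits, pos_weight=pos_weight)

    # The documented formula written out: only the positive (z == 1)
    # term is multiplied by pos_weight.
    sig = tf.sigmoid(logits)
    loss_manual = (labels * -tf.math.log(sig) * pos_weight
                   + (1.0 - labels) * -tf.math.log(1.0 - sig))

    print(loss_tf.numpy(), loss_manual.numpy())  # agree up to float error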
So, assuming you have 5 positive examples in your dataset and 7 negative, if you set pos_weight=2, then your loss would be as if you had 10 positive examples and 7 negative.
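A quick way to verify that equivalence numerically (again with invented logits): weighting 5 positive examples by pos_weight=2 gives the same total loss as literally duplicating them to 10 unweighted positives.

    import tensorflow as tf

    pos_logits = tf.constant([-0.5, 1.2, 0.3, -2.0, 0.8])  # 5 made-up positive logits
    ones = tf.ones_like(pos_logits)

    # 5 positives weighted by 2 ...
    weighted = tf.reduce_sum(tf.nn.weighted_cross_entropy_with_logits(
        labels=ones, logits=pos_logits, pos_weight=2.0))

    # ... versus the same 5 positives duplicated to 10, unweighted.
    duplicated = tf.reduce_sum(tf.nn.weighted_cross_entropy_with_logits(
        labels=tf.concat([ones, ones], axis=0),
        logits=tf.concat([pos_logits, pos_logits], axis=0),
        pos_weight=1.0))

    print(float(weighted), float(duplicated))  # identical totals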
Suppose you get all of the positive examples wrong and all of the negative ones right. Originally you would have 5 false negatives and 0 false positives. When you increase pos_weight, the loss contributed by those false negatives is artificially inflated, as if there were more of them. Note that the loss value coming from false positives doesn't change.
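A sketch of that worst case (logit values are invented): all 5 positives are predicted strongly negative (false negatives), and all 7 negatives are predicted correctly. Increasing pos_weight scales the false-negative loss linearly while the negative-example term, which is where any false-positive loss would live, stays fixed.

    import tensorflow as tf

    # 5 positives all predicted strongly negative (false negatives),
    # 7 negatives all predicted strongly negative (correct). Logits invented.
    logits = tf.constant([-3.0] * 12)
    labels = tf.constant([1.0] * 5 + [0.0] * 7)

    for w in (1.0, 2.0, 5.0):
        loss = tf.nn.weighted_cross_entropy_with_logits(
            labels=labels, logits=logits, pos_weight=w)
        fn_loss = float(tf.reduce_sum(loss[:5]))   # from the 5 false negatives
        neg_loss = float(tf.reduce_sum(loss[5:]))  # from the 7 correct negatives
        print(f"pos_weight={w}: false-negative loss={fn_loss:.3f}, "
              f"negative-term loss={neg_loss:.3f}")

    # The false-negative loss grows in proportion to pos_weight; the
    # negative-example term never moves.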