I am working on a multi-label problem and I am trying to determine the accuracy of my model.
My model:
NUM_CLASSES = 361
x = tf.placeholder(tf.float32, [None, IMAGE_PIXELS])
y_ = tf.placeholder(tf.float32, [None, NUM_CLASSES])
# create the network
pred = conv_net( x )
# loss
cost = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits( labels=y_, logits=pred ) )
# train step
train_step = tf.train.AdamOptimizer().minimize( cost )
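Note that `sigmoid_cross_entropy_with_logits` expects *raw* logits, not sigmoid outputs. A minimal NumPy sketch of the numerically stable formula TensorFlow uses internally (`max(x, 0) - x*z + log(1 + exp(-|x|))`), just to illustrate what the loss computes; the array values are made up:

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(logits, labels):
    # Numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([[0.0, 2.0, -2.0]])   # raw scores, NOT probabilities
labels = np.array([[1.0, 1.0, 0.0]])
loss = sigmoid_cross_entropy_with_logits(logits, labels).mean()
```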
I want to calculate the accuracy in two different ways:
- % of all labels that are predicted correctly
- % of images where ALL labels are predicted correctly
Unfortunately, I am only able to calculate the % of all labels that are predicted correctly.
I thought this code would calculate the % of images where ALL labels are predicted correctly:
correct_prediction = tf.equal( tf.round( pred ), tf.round( y_ ) )
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
and this code the % of all labels that are predicted correctly:
pred_reshape = tf.reshape( pred, [ BATCH_SIZE * NUM_CLASSES, 1 ] )
y_reshape = tf.reshape( y_, [ BATCH_SIZE * NUM_CLASSES, 1 ] )
correct_prediction_all = tf.equal( tf.round( pred_reshape ), tf.round( y_reshape ) )
accuracy_all = tf.reduce_mean( tf.cast(correct_prediction_all, tf.float32 ) )
Somehow the coherence of the labels belonging to one image is lost, and I am not sure why.
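To make the two target metrics concrete, here is a small NumPy sketch (a toy batch with made-up values, not the TensorFlow graph above) computing both: per-label accuracy, and per-image accuracy where every label must match. It assumes the predictions are already probabilities in [0, 1]:

```python
import numpy as np

# Hypothetical toy batch: 3 images, 4 labels each.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]], dtype=float)
# Predicted probabilities (i.e. after sigmoid), thresholded at 0.5.
y_prob = np.array([[0.9, 0.2, 0.8, 0.1],   # all 4 labels correct
                   [0.4, 0.7, 0.6, 0.2],   # 3 of 4 correct
                   [0.8, 0.3, 0.1, 0.9]])  # 3 of 4 correct

correct = (np.round(y_prob) == y_true)

label_accuracy = correct.mean()              # % of all labels correct: 10/12
image_accuracy = correct.all(axis=1).mean()  # % of images fully correct: 1/3
```

The key point is the `axis=1` reduction: it keeps the labels of each image grouped together, which is exactly the per-image coherence the reshape-based code throws away.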
Hamming loss is the fraction of wrong labels to the total number of labels. In multi-class classification, Hamming loss is calculated as the Hamming distance between y_true and y_pred. In multi-label classification, Hamming loss penalizes only the individual labels.
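A quick NumPy sketch of Hamming loss for the multi-label case (toy arrays, made up for illustration); note it is simply one minus the per-label accuracy discussed above:

```python
import numpy as np

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 1, 1],
                   [0, 1, 0]])

# Hamming loss: fraction of individual labels that are wrong.
hamming = (y_true != y_pred).mean()   # 1 wrong label out of 6
```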
Multi-label classification involves predicting zero or more class labels. Unlike normal classification tasks where class labels are mutually exclusive, multi-label classification requires specialized machine learning algorithms that support predicting multiple mutually non-exclusive classes or “labels.”
I believe the bug in your code is in: correct_prediction = tf.equal( tf.round( pred ), tf.round( y_ ) ).
pred should be unscaled logits (i.e. without a final sigmoid). Here you want to compare the output of sigmoid(pred) and y_ (both in the interval [0, 1]), so you have to write:
correct_prediction = tf.equal(tf.round(tf.nn.sigmoid(pred)), tf.round(y_))
Then to compute the two accuracies:
accuracy1 = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
all_labels_true = tf.reduce_min(tf.cast(correct_prediction, tf.float32), 1)
accuracy2 = tf.reduce_mean(all_labels_true)
# to get the mean accuracy over all labels; prediction_tensor holds scaled values (i.e. after a final sigmoid layer)
correct_prediction = tf.equal( tf.round( prediction_tensor ), tf.round( ground_truth_tensor ) )
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# to get the mean accuracy where all labels need to be correct
all_labels_true = tf.reduce_min(tf.cast(correct_prediction, tf.float32), 1)
accuracy2 = tf.reduce_mean(all_labels_true)
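A NumPy sketch of the same computation on made-up logits, showing why the sigmoid matters: rounding raw logits (which are not in [0, 1]) matches almost nothing, while rounding sigmoid outputs gives the intended two accuracies (`reduce_min` over axis 1 corresponds to `all(axis=1)` here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw logits for 2 images x 3 labels.
logits = np.array([[ 2.0, -1.0,  3.0],
                   [-2.0, -0.5, -1.5]])
y_true = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0]])

# Buggy version: rounding raw logits, which lie outside [0, 1].
wrong = (np.round(logits) == y_true)            # matches nothing here

# Fixed version: squash through sigmoid first, then threshold at 0.5.
correct = (np.round(sigmoid(logits)) == y_true)

accuracy1 = correct.mean()              # per-label accuracy: 5/6
accuracy2 = correct.all(axis=1).mean()  # per-image (all labels) accuracy: 1/2
```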
reference: https://gist.github.com/sbrodehl/2120a95d57963a289cc23bcfb24bee1b