The current tf.contrib.metrics.streaming_accuracy can only calculate top-1 accuracy, not top-k. As a workaround, this is what I've been using:
tf.reduce_mean(tf.cast(tf.nn.in_top_k(predictions=predictions, targets=labels, k=5), tf.float32))
However, this does not give me a streaming accuracy averaged across batches, which would be useful for getting a stable evaluation accuracy. I am currently computing this streaming top-5 accuracy manually from its numpy output, but that means I won't be able to visualize the metric in TensorBoard.
Is there a simpler implementation, e.g. by creating an accuracy/update op pair, or is there an existing function that already does this?
Thank you.
The function tf.metrics.accuracy creates two local variables, total and count, that are used to compute the frequency with which predictions matches labels. This frequency is ultimately returned as accuracy: an idempotent operation that simply divides total by count.
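The total/count bookkeeping described above can be illustrated with a small NumPy sketch (a hypothetical stand-in for the metric's internals, not the TensorFlow source): each update adds the batch's correct-prediction count to total and the batch size to count, and the result just divides one by the other.

```python
import numpy as np

class StreamingAccuracy:
    """Sketch of a streaming accuracy metric backed by total/count."""
    def __init__(self):
        self.total = 0.0   # running sum of correct predictions
        self.count = 0.0   # running number of examples seen

    def update(self, predictions, labels):
        self.total += np.sum(predictions == labels)
        self.count += len(labels)

    def result(self):
        # Idempotent: calling it repeatedly just divides total by count.
        return self.total / self.count

sa = StreamingAccuracy()
sa.update(np.array([1, 0, 2]), np.array([1, 1, 2]))  # 2 of 3 correct
sa.update(np.array([0, 0]), np.array([0, 1]))        # 1 of 2 correct
print(sa.result())  # 3 correct out of 5 examples -> 0.6
```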
If class_id is specified, we calculate precision by considering only the entries in the batch for which class_id is above the threshold and/or in the top-k highest predictions, and computing the fraction of them for which class_id is indeed a correct label.
Accuracy calculates the percentage of predicted values (yPred) that match the actual values (yTrue). A record is considered accurate if its predicted value equals its actual value. Accuracy is then the number of accurately predicted records divided by the total number of records.
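As a minimal illustration of that definition, in NumPy the same fraction is just the mean of an elementwise equality check (example values are made up):

```python
import numpy as np

y_true = np.array([0, 1, 2, 1])  # actual classes
y_pred = np.array([0, 1, 1, 1])  # predicted classes

# Fraction of records where prediction equals the actual value.
accuracy = np.mean(y_pred == y_true)  # 3 of 4 records match
print(accuracy)  # 0.75
```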
You could replace your use of tf.contrib.metrics.streaming_accuracy with the lower-level tf.metrics.mean, which is, by the way, ultimately used by streaming_accuracy -- you will find a similarity in their respective documentations.
E.g. (not tested)
tf.metrics.mean(tf.nn.in_top_k(predictions=predictions, targets=labels, k=5))
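To make concrete what that expression accumulates, here is a NumPy sketch (an assumption about the semantics, not the TF implementation): in_top_k marks each example whose true label is among the k highest-scoring classes, and the streaming mean averages those 0/1 marks over every batch seen so far.

```python
import numpy as np

def in_top_k(predictions, targets, k):
    """Per example: is the target among the k highest-scoring classes?"""
    topk = np.argsort(predictions, axis=1)[:, -k:]  # indices of k best scores
    return np.array([t in row for t, row in zip(targets, topk)])

total, count = 0.0, 0.0  # the two local variables a streaming mean keeps
batches = [
    (np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]), np.array([1, 2])),
    (np.array([[0.2, 0.2, 0.6]]), np.array([2])),
]
for preds, labels in batches:
    hits = in_top_k(preds, labels, k=2)
    total += hits.sum()   # correct-in-top-k examples this batch
    count += hits.size    # examples this batch

print(total / count)  # streaming top-2 accuracy across both batches
```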
For top-k accuracy per batch, this also works:

k_val = 3
accs = []
for each_batch in range(batch_size):
    acc = tf.keras.metrics.top_k_categorical_accuracy(
        y_true=tf_class1[each_batch], y_pred=tf_class2[each_batch], k=k_val)
    accs.append(acc)
acc_data_per_batch = tf.reduce_mean(accs)
tf.keras.metrics.top_k_categorical_accuracy returns K.mean(nn.in_top_k(y_pred, math_ops.argmax(y_true, axis=-1), k), axis=-1) per batch.
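The behavior described there can be sketched in NumPy (a hypothetical re-implementation for illustration, not the Keras source): the one-hot y_true is collapsed to a class index with argmax, and that index is checked against the k top-scoring entries of y_pred.

```python
import numpy as np

def top_k_categorical_accuracy(y_true, y_pred, k=5):
    true_idx = np.argmax(y_true, axis=-1)           # one-hot -> class index
    topk_idx = np.argsort(y_pred, axis=-1)[:, -k:]  # k highest-scoring classes
    hits = np.array([t in row for t, row in zip(true_idx, topk_idx)])
    return hits.astype(float).mean()                # per-batch mean

y_true = np.array([[0, 0, 1], [1, 0, 0]])           # one-hot labels
y_pred = np.array([[0.2, 0.3, 0.5], [0.1, 0.6, 0.3]])
print(top_k_categorical_accuracy(y_true, y_pred, k=2))  # one hit of two -> 0.5
```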