I am running a first test of a convolutional neural network with TensorFlow. I adapted the recommended queue-runner approach from the programming guide (see the session definition below). output is the final result of the CNN (only this last step is shown), and label_batch_vector is the training label batch.
output = tf.matmul(h_pool2_flat, W_fc1) + b_fc1
label_batch_vector = tf.one_hot(label_batch, 33)
correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(label_batch_vector, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
print_accuracy = tf.Print(accuracy, [accuracy])
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (like the epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
    while not coord.should_stop():
        # Run training steps or whatever
        sess.run(train_step)
        sess.run(print_accuracy)
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
finally:
    # When done, ask the threads to stop.
    coord.request_stop()

# Wait for threads to finish.
coord.join(threads)
sess.close()
My problem is that the accuracy is calculated for each batch, whereas I would like it calculated per epoch. I would need to do the following: initialize an epoch_accuracy tensor, add each batch accuracy computed during the epoch to it, and at the end of the epoch report the resulting training-set accuracy. However, I cannot find any example of this with the queue threads I implemented (which is the method recommended by TensorFlow). Can anyone help?
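For reference, here is a minimal NumPy sketch of the accumulation described above. The class name EpochAccuracy is illustrative, not part of any API. Note one subtlety: rather than averaging per-batch accuracies (which over-weights a smaller final batch), it accumulates correct counts and sample counts, which gives the exact epoch accuracy.

```python
import numpy as np

class EpochAccuracy:
    """Accumulates correct/total counts over the batches of one epoch."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, logits, labels):
        # logits: (batch, n_classes) scores; labels: (batch,) integer class ids
        preds = np.argmax(logits, axis=1)
        self.correct += int(np.sum(preds == labels))
        self.total += labels.shape[0]

    def result(self):
        return self.correct / self.total if self.total else 0.0

    def reset(self):
        # Call at the start of each epoch.
        self.correct = 0
        self.total = 0

# Two batches of different sizes:
acc = EpochAccuracy()
acc.update(np.array([[0.9, 0.1], [0.2, 0.8]]), np.array([0, 1]))  # 2/2 correct
acc.update(np.array([[0.3, 0.7]]), np.array([0]))                 # 0/1 correct
print(acc.result())  # 2/3, not the mean of the batch accuracies (1.0 and 0.0)
```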
To compute accuracy on a stream of data (your sequence of batches, here), you can use the tf.metrics.accuracy function in TensorFlow. See its documentation for details.
You define the op like this:
_, accuracy = tf.metrics.accuracy(y_true, y_pred)
Then you update the accuracy on each batch in this way:
sess.run(accuracy)
Note that tf.metrics.accuracy returns a (value, update_op) pair; the snippet above binds the update op, so each sess.run both folds the current batch into the metric's counters and returns the running accuracy. The counters are stored in local variables, so re-running tf.local_variables_initializer() resets them at the start of each epoch.
P.S.: all functions in tf.metrics (auc, recall, etc.) support streaming.
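To make the (value, update_op) contract concrete, here is a plain-Python model of the streaming behaviour (the function name make_streaming_accuracy is illustrative, not the real API): the metric keeps two counters, the update function folds a batch in and returns the running value, and the value function reads the current running accuracy without changing state.

```python
def make_streaming_accuracy():
    """Returns (value, update) closures sharing hidden counters,
    mimicking the pair returned by tf.metrics.accuracy."""
    state = {"correct": 0, "total": 0}

    def value():
        # Read the running accuracy; does not modify state.
        return state["correct"] / state["total"] if state["total"] else 0.0

    def update(y_true, y_pred):
        # Fold one batch into the counters, then return the running value.
        state["correct"] += sum(int(t == p) for t, p in zip(y_true, y_pred))
        state["total"] += len(y_true)
        return value()

    return value, update

value, update = make_streaming_accuracy()
update([0, 1, 1], [0, 1, 0])   # batch 1: 2/3 correct
print(update([1], [1]))        # batch 2: running accuracy is now 3/4 = 0.75
print(value())                 # reading again does not change the state
```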