 

TensorFlow Precision / Recall / F1 score and Confusion matrix

I would like to know if there is a way to use the different score functions from the scikit-learn package, like this one:

from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)

in a TensorFlow model, to get the different scores.

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for epoch in xrange(1):
        avg_cost = 0.
        total_batch = len(train_arrays) / batch_size
        for batch in range(total_batch):
            train_step.run(feed_dict={x: train_arrays, y: train_labels})
            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)

    print "Optimization Finished!"

    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", batch, accuracy.eval({x: test_arrays, y: test_labels})

Will I have to run the session again to get the predictions?

asked Feb 12 '16 by nicolasdavid


2 Answers

You do not really need sklearn to calculate precision/recall/F1 score. You can easily express them in a TF-ish way by looking at the formulas:

precision = TP / (TP + FP), recall = TP / (TP + FN), F1 = 2 * precision * recall / (precision + recall)

Now if you have your actual and predicted values as vectors of 0/1, you can calculate TP, TN, FP, FN using tf.count_nonzero:

# Count with a float dtype so the divisions below do not truncate to integers
TP = tf.count_nonzero(predicted * actual, dtype=tf.float32)
TN = tf.count_nonzero((predicted - 1) * (actual - 1), dtype=tf.float32)
FP = tf.count_nonzero(predicted * (actual - 1), dtype=tf.float32)
FN = tf.count_nonzero((predicted - 1) * actual, dtype=tf.float32)
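In the question's two-class setup, the predicted and actual vectors above can be obtained from the softmax output pred and the one-hot labels y with tf.argmax:

predicted = tf.argmax(pred, 1)  # softmax output -> class index (0 or 1)
actual = tf.argmax(y, 1)        # one-hot labels -> class index (0 or 1)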

Now your metrics are easy to calculate:

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
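As a sanity check, here is a self-contained toy run of the formulas above (the input vectors are made-up example data, and the expected numbers are worked out by hand):

import tensorflow as tf

# Made-up 0/1 vectors: 4 actual positives, 5 predicted positives
actual = tf.constant([1, 1, 0, 0, 1, 0, 1, 0], dtype=tf.int64)
predicted = tf.constant([1, 0, 0, 1, 1, 0, 1, 1], dtype=tf.int64)

# Count with a float dtype so the divisions below are true divisions
TP = tf.count_nonzero(predicted * actual, dtype=tf.float32)              # 3
TN = tf.count_nonzero((predicted - 1) * (actual - 1), dtype=tf.float32)  # 2
FP = tf.count_nonzero(predicted * (actual - 1), dtype=tf.float32)        # 2
FN = tf.count_nonzero((predicted - 1) * actual, dtype=tf.float32)        # 1

precision = TP / (TP + FP)                          # 3 / 5 = 0.6
recall = TP / (TP + FN)                             # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # 0.9 / 1.35 ~ 0.667

with tf.Session() as sess:
    print sess.run([precision, recall, f1])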
answered Oct 23 '22 by Salvador Dali


Maybe this example will speak to you:

import numpy as np
import sklearn as sk
import sklearn.metrics

pred = multilayer_perceptron(x, weights, biases)
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for epoch in xrange(150):
        avg_cost = 0.
        for i in xrange(total_batch):
            train_step.run(feed_dict={x: train_arrays, y: train_labels})
            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)

    # metrics: run accuracy and the predicted classes in a single session call
    y_p = tf.argmax(pred, 1)
    val_accuracy, y_pred = sess.run([accuracy, y_p], feed_dict={x: test_arrays, y: test_label})

    print "validation accuracy:", val_accuracy
    y_true = np.argmax(test_label, 1)
    print "Precision", sk.metrics.precision_score(y_true, y_pred)
    print "Recall", sk.metrics.recall_score(y_true, y_pred)
    print "f1_score", sk.metrics.f1_score(y_true, y_pred)
    print "confusion_matrix"
    print sk.metrics.confusion_matrix(y_true, y_pred)
    fpr, tpr, thresholds = sk.metrics.roc_curve(y_true, y_pred)
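If you would rather stay inside the graph for the confusion matrix as well, TF 1.x also ships tf.confusion_matrix, which takes integer class labels rather than one-hot vectors. A small sketch, reusing the pred, y, x, test_arrays and test_label names from the example above:

# Compute the confusion matrix inside the graph.
# tf.confusion_matrix expects integer class labels, not one-hot vectors.
y_true_op = tf.argmax(y, 1)
y_pred_op = tf.argmax(pred, 1)
confusion = tf.confusion_matrix(y_true_op, y_pred_op)
# inside the session:
# print sess.run(confusion, feed_dict={x: test_arrays, y: test_label})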
answered Oct 23 '22 by nicolasdavid