What is a threshold in a Precision-Recall curve?

I am aware of the concept of Precision as well as the concept of Recall. But I am finding it very hard to understand the idea of a 'threshold' which makes any P-R curve possible.

Imagine I need to build a model that predicts the recurrence (yes or no) of cancer in patients, using some decent classification algorithm on relevant features. I split my data into training and test sets. Let's say I trained the model on the training data and computed my Precision and Recall metrics on the test data.

But HOW can I draw a P-R curve now? On what basis? I just have two values: one precision and one recall. I read that it's the 'threshold' that allows you to get several precision-recall pairs. But what is that threshold? I am still a beginner and I am unable to comprehend the very concept of a threshold.

I see curves like the one below in so many classification model comparisons. But how do they get all those pairs?

[Figure: Model Comparison Using Precision-Recall Curve]

asked Sep 14 '17 by Mr.A




1 Answer

ROC Curves:

  • x-axis: False Positive Rate FPR = FP / (FP + TN) = FP / N (N = all actual negatives)
  • y-axis: True Positive Rate TPR = Recall = TP / (TP + FN) = TP / P (P = all actual positives)

Precision-Recall Curves:

  • x-axis: Recall = TP / (TP + FN) = TP / P = TPR
  • y-axis: Precision = TP / (TP + FP) = TP / PP (PP = all predicted positives)
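
To make these abbreviations concrete, here is a minimal Python sketch with made-up confusion-matrix counts (the numbers are purely illustrative):

    # Hypothetical counts, for illustration only.
    TP, FP, TN, FN = 80, 20, 890, 10

    P = TP + FN    # all actual positives
    N = FP + TN    # all actual negatives
    PP = TP + FP   # all predicted positives

    fpr = FP / N                  # ROC x-axis
    tpr = TP / P                  # ROC y-axis = Recall
    recall = TP / (TP + FN)       # P-R x-axis (same as TPR)
    precision = TP / (TP + FP)    # P-R y-axis

    print(f"FPR={fpr:.3f}, Recall={recall:.3f}, Precision={precision:.3f}")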

Your cancer detection example is a binary classification problem. Your predictions are based on a probability: the probability of having (or not having) cancer.

In general, an instance is classified as A if P(A) > 0.5 (your threshold value). For this threshold, you get one Precision-Recall pair based on the resulting True Positives, True Negatives, False Positives and False Negatives.

Now, as you change that 0.5 threshold, you get a different result (a different pair). You could already classify a patient as 'has cancer' for P(A) > 0.3. This will decrease Precision and increase Recall: you would rather tell someone that he has cancer even though he does not, to make sure that patients who do have cancer are sure to get the treatment they need. This is the intuitive trade-off between TPR and FPR, or Precision and Recall, or Sensitivity and Specificity.
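
Here is a minimal sketch of that effect, with hypothetical probabilities and labels (both made up for illustration):

    import numpy as np

    # Hypothetical data: 1 = has cancer, y_prob = model's P(A).
    y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 1])
    y_prob = np.array([0.9, 0.8, 0.45, 0.4, 0.35, 0.6, 0.2, 0.1, 0.55, 0.3])

    def precision_recall_at(threshold):
        y_pred = (y_prob > threshold).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        precision = tp / (tp + fp) if (tp + fp) else 1.0
        recall = tp / (tp + fn)
        return precision, recall

    for t in (0.5, 0.3):
        p, r = precision_recall_at(t)
        print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")

Lowering the threshold from 0.5 to 0.3 catches more of the true cancer cases (higher Recall) at the cost of more false alarms (lower Precision). Sweeping the threshold across all values and plotting each (Recall, Precision) pair is exactly what produces the curve.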

Let's add these terms, as you will see them more often in biostatistics.

  • Sensitivity = TP / P = Recall = TPR
  • Specificity = TN / N = (1 – FPR)

ROC-curves and Precision-Recall curves visualize all these possible thresholds of your classifier.
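
In practice you do not loop over thresholds by hand; libraries do the sweep for you. A sketch using scikit-learn's helpers (reusing the hypothetical y_true and y_prob from above):

    from sklearn.metrics import precision_recall_curve, roc_curve

    # Each returned entry corresponds to one candidate threshold.
    precision, recall, pr_thresholds = precision_recall_curve(y_true, y_prob)
    fpr, tpr, roc_thresholds = roc_curve(y_true, y_prob)

    # Plotting (recall, precision) gives the P-R curve,
    # and (fpr, tpr) gives the ROC curve.
    for t, p, r in zip(pr_thresholds, precision, recall):
        print(f"threshold={t:.2f}: precision={p:.2f}, recall={r:.2f}")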

You should consider these metrics if accuracy alone is not a suitable quality measure. Classifying all patients as 'does not have cancer' will give you the highest accuracy on an imbalanced dataset, but the values on your ROC and Precision-Recall curves degenerate to 1s and 0s (e.g., Recall = 0), exposing the classifier as useless.
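
A quick sketch of why accuracy misleads here, with a hypothetical imbalanced dataset:

    import numpy as np

    # Hypothetical data: 990 healthy patients, 10 with cancer.
    y_true = np.array([0] * 990 + [1] * 10)
    y_pred = np.zeros_like(y_true)  # always predict 'does not have cancer'

    accuracy = np.mean(y_pred == y_true)        # 0.99 -- looks great
    tp = np.sum((y_pred == 1) & (y_true == 1))  # 0 true positives
    recall = tp / np.sum(y_true == 1)           # 0.0 -- useless
    print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")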

answered Sep 21 '22 by lnathan