How to implement the fact that false positives are worse than false negatives in a neural network?

I have a yes/no classification problem, where false positives are worse than false negatives.

Is there a way to build this fact into a neural network, in particular with MATLAB's Neural Network Toolbox?

asked Mar 25 '10 by liborw


People also ask

How do you handle false positives in machine learning?

Machine learning systems help reduce false positive rates in several ways, one being structuring data: false positive remediation involves analyzing vast amounts of unstructured data drawn from external sources such as media outlets, social networks, and other public and private records.

How do you reduce false positives and false negatives?

The most effective way to reduce both false positives and false negatives is to use a high-quality method. This is particularly important in chromatography, though method-development work is necessary in other analytical techniques as well.

What is a false positive and false negative and how are they significant?

A false positive is when a scientist determines something is true when it is actually false (also called a type I error). A false positive is a “false alarm.” A false negative is saying something is false when it is actually true (also called a type II error).
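As a minimal worked example (the arrays are made up for illustration), both error types can be counted directly from labels and predictions:

```python
import numpy as np

y_true = np.array([1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1])

false_positives = np.sum((y_pred == 1) & (y_true == 0))  # type I errors -> 1
false_negatives = np.sum((y_pred == 0) & (y_true == 1))  # type II errors -> 1
```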

How can machine learning reduce false positives and false negatives?

To minimize the number of false negatives (FN) or false positives (FP), we can also retrain a model on the same data with slightly different output values, adjusted according to its previous results. This involves taking the model and training it on the dataset again until the loss converges to a good minimum.
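A hedged sketch of that retraining loop in Python (scikit-learn, the synthetic data, and the weighting factor of 5 are all assumptions for illustration, not part of the original answer): fit once, find the errors you most want to avoid, increase their sample weights, and fit again on the same data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)

# First pass: fit the model and look at where it went wrong.
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

# Upweight the errors we most want to avoid (here: false positives).
w = np.ones(len(y))
w[(pred == 1) & (y == 0)] = 5.0  # illustrative factor, not a tuned value

# Second pass: retrain on the same data with the adjusted weights.
clf = LogisticRegression().fit(X, y, sample_weight=w)
```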


1 Answer

What you need is a cost-sensitive meta-classifier (a meta-classifier works with any arbitrary classifier, be it ANN, SVM, or any other).

This can be done in two ways:

  • re-weighting training instances according to a cost matrix. This is done by resampling the data so that a particular class is over-represented; the model built is then more sensitive to that class than to the others.
  • predicting the class with the minimum expected misclassification cost (rather than the most likely class). The idea here is to minimize the total expected cost by making cheap mistakes more often and expensive mistakes less often. Both ideas are sketched in the code after this list.
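As a concrete illustration of both approaches, here is a minimal Python sketch (scikit-learn rather than MATLAB's toolbox, a made-up 2x2 cost matrix, and synthetic data are all assumptions; none of these names or values come from the original answer):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Hypothetical cost matrix: COST[true_class][predicted_class].
# Row 0 = actual negative, row 1 = actual positive; a false positive
# (true 0, predicted 1) is 5x as expensive as a false negative here.
COST = np.array([[0.0, 5.0],
                 [1.0, 0.0]])

X, y = make_classification(n_samples=1000, random_state=0)

# (a) Re-weight training instances: weight each example by the cost
# of misclassifying its true class (row sums work since diagonals are 0).
misclass_cost = COST.sum(axis=1)          # [5.0, 1.0]
weights = misclass_cost[y]
clf = LogisticRegression().fit(X, y, sample_weight=weights)

# (b) Predict the class with minimum expected misclassification cost
# instead of the most likely class.
proba = clf.predict_proba(X)              # columns: P(class 0), P(class 1)
expected_cost = proba @ COST              # expected cost of each prediction
y_pred = expected_cost.argmin(axis=1)
```

Note the difference in where the cost enters: in (a) the expensive class gets proportionally heavier training weight, while in (b) the cost asymmetry is applied only at prediction time, so the same trained model can serve different cost matrices.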

One algorithm that implements the first approach is SECOC, which uses error-correcting output codes; an example of the second approach is MetaCost, which uses bagging to improve the probability estimates of the classifier.
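A rough, hedged sketch of the MetaCost idea follows (not the published algorithm in full; the helper name, parameters, and base learner are invented for illustration): bag several models to smooth the probability estimates, relabel each training example with its minimum-expected-cost class, then retrain an ordinary classifier on the relabeled data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.base import clone

def metacost_relabel(X, y, cost, base=DecisionTreeClassifier(), n_bags=10, seed=None):
    """Relabel training data with the minimum-expected-cost class,
    using bagged probability estimates (the core of MetaCost)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    probas = np.zeros((n, cost.shape[0]))
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)   # bootstrap sample with replacement
        model = clone(base).fit(X[idx], y[idx])
        probas += model.predict_proba(X)
    probas /= n_bags                       # averaged (smoothed) estimates
    return (probas @ cost).argmin(axis=1)  # cheapest class per example

# Usage: relabel, then train any classifier (an ANN included) on the new labels.
# y_new = metacost_relabel(X, y, COST)
# final = clone(base).fit(X, y_new)
```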

answered Oct 06 '22 by Amro