A simple explanation of Naive Bayes Classification [closed]

I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step by step process in English. I understand it takes comparisons by times occurred as a probability, but I have no idea how the training data is related to the actual dataset.

Please give me an explanation of what role the training set plays. I am giving a very simple example for fruits here, like banana for example

training set: round-red, round-orange, oblong-yellow, round-red
dataset:      round-red, round-orange, round-red, round-orange, oblong-yellow,
              round-red, round-orange, oblong-yellow, oblong-yellow, round-red
asked Apr 08 '12 by Jaggerjack



2 Answers

The accepted answer has many elements of k-NN (k-nearest neighbors), a different algorithm.

Both k-NN and Naive Bayes are classification algorithms. Conceptually, k-NN uses the idea of "nearness" to classify new entities: in k-NN, 'nearness' is modeled with measures such as Euclidean distance or cosine distance. By contrast, Naive Bayes uses the concept of 'probability' to classify new entities.

Since the question is about Naive Bayes, here's how I'd describe the ideas and steps to someone. I'll try to use as few equations and as much plain English as possible.

First, Conditional Probability & Bayes' Rule

Before someone can understand and appreciate the nuances of Naive Bayes', they need to know a couple of related concepts first, namely, the idea of Conditional Probability, and Bayes' Rule. (If you are familiar with these concepts, skip to the section titled Getting to Naive Bayes')

Conditional Probability in plain English: What is the probability that something will happen, given that something else has already happened.

Let's say that there is some Outcome O and some Evidence E. From the way these probabilities are defined, the probability of having both the Outcome O and the Evidence E is: (the probability of O occurring) multiplied by (the probability of E given that O happened).

One Example to understand Conditional Probability:

Let's say we have a collection of US Senators. Senators could be Democrats or Republicans. They are also either male or female.

If we select one senator completely randomly, what is the probability that this person is a female Democrat? Conditional Probability can help us answer that.

Probability of (Democrat and Female Senator) = Prob(Senator is Democrat) multiplied by the conditional probability of being Female given that they are a Democrat.

  P(Democrat & Female) = P(Democrat) * P(Female | Democrat)  

We could compute the exact same thing, the reverse way:

  P(Democrat & Female) = P(Female) * P(Democrat | Female)  
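To make the arithmetic concrete, here is a minimal Python sketch with made-up counts (the numbers below are assumptions for illustration, not real Senate data):

    # Hypothetical counts, for illustration only.
    total_senators   = 100
    democrats        = 50    # senators who are Democrats
    female_democrats = 20    # senators who are both Democrat and Female

    p_democrat         = democrats / total_senators      # P(Democrat)
    p_female_given_dem = female_democrats / democrats     # P(Female | Democrat)

    # P(Democrat & Female) = P(Democrat) * P(Female | Democrat)
    p_democrat_and_female = p_democrat * p_female_given_dem
    print(p_democrat_and_female)    # 0.2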

Understanding Bayes Rule

Conceptually, this is a way to go from P(Evidence| Known Outcome) to P(Outcome|Known Evidence). Often, we know how frequently some particular evidence is observed, given a known outcome. We have to use this known fact to compute the reverse, to compute the chance of that outcome happening, given the evidence.

P(Outcome given that we know some Evidence) = P(Evidence given that we know the Outcome) times Prob(Outcome), scaled by the P(Evidence)

The classic example to understand Bayes' Rule:

Probability of Disease D given Test-positive =

                 P(Test is positive|Disease) * P(Disease)
     _______________________________________________________________
     (scaled by) P(Testing Positive, with or without the disease)
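Here is a minimal Python sketch of this calculation, with made-up numbers for the disease prevalence and the test's accuracy (all figures below are assumptions chosen only to illustrate the formula):

    # Assumed numbers, for illustration only.
    p_disease           = 0.01    # P(Disease): 1% of people have the disease
    p_pos_given_disease = 0.95    # P(Test positive | Disease)
    p_pos_given_healthy = 0.05    # P(Test positive | No disease)

    # P(Test positive), with or without the disease (law of total probability)
    p_positive = (p_pos_given_disease * p_disease
                  + p_pos_given_healthy * (1 - p_disease))

    # Bayes' rule: P(Disease | Test positive)
    p_disease_given_pos = p_pos_given_disease * p_disease / p_positive
    print(round(p_disease_given_pos, 3))    # ~0.161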

Now, all this was just preamble, to get to Naive Bayes.

Getting to Naive Bayes'

So far, we have talked only about one piece of evidence. In reality, we have to predict an outcome given multiple pieces of evidence. In that case, the math gets very complicated. To get around that complication, one approach is to 'uncouple' the multiple pieces of evidence and treat each piece of evidence as independent. This approach is why this is called naive Bayes.

P(Outcome|Multiple Evidence) =
    P(Evidence1|Outcome) * P(Evidence2|Outcome) * ... * P(EvidenceN|Outcome) * P(Outcome)
    scaled by P(Multiple Evidence)

Many people choose to remember this as:

                      P(Likelihood of Evidence) * Prior prob of outcome
P(outcome|evidence) = _________________________________________________
                                    P(Evidence)

Notice a few things about this equation:

  • If the Prob(evidence|outcome) is 1, then we are just multiplying by 1.
  • If the Prob(some particular evidence|outcome) is 0, then the whole prob. becomes 0. If you see contradicting evidence, we can rule out that outcome.
  • Since we divide everything by P(Evidence), we can even get away without calculating it.
  • The intuition behind multiplying by the prior is so that we give high probability to more common outcomes, and low probabilities to unlikely outcomes. These are also called base rates and they are a way to scale our predicted probabilities.

How to Apply NaiveBayes to Predict an Outcome?

Just run the formula above for each possible outcome. Since we are trying to classify, each outcome is called a class and it has a class label. Our job is to look at the evidence, to consider how likely it is to be this class or that class, and assign a label to each entity. Again, we take a very simple approach: The class that has the highest probability is declared the "winner" and that class label gets assigned to that combination of evidences.

Fruit Example

Let's try it out on an example to increase our understanding: The OP asked for a 'fruit' identification example.

Let's say that we have data on 1000 pieces of fruit. They happen to be Banana, Orange or some Other Fruit. We know 3 characteristics about each fruit:

  1. Whether it is Long
  2. Whether it is Sweet and
  3. If its color is Yellow.

This is our 'training set.' We will use this to predict the type of any new fruit we encounter.

Type        | Long | Not Long || Sweet | Not Sweet || Yellow | Not Yellow | Total
____________________________________________________________________________________
Banana      |  400 |    100   ||  350  |    150    ||  450   |     50     |   500
Orange      |    0 |    300   ||  150  |    150    ||  300   |      0     |   300
Other Fruit |  100 |    100   ||  150  |     50    ||   50   |    150     |   200
____________________________________________________________________________________
Total       |  500 |    500   ||  650  |    350    ||  800   |    200     |  1000

We can pre-compute a lot of things about our fruit collection.

The so-called "Prior" probabilities. (If we didn't know any of the fruit attributes, this would be our guess.) These are our base rates.

P(Banana)      = 0.5 (500/1000)
P(Orange)      = 0.3
P(Other Fruit) = 0.2

Probability of "Evidence"

P(Long)   = 0.5
P(Sweet)  = 0.65
P(Yellow) = 0.8

Probability of "Likelihood"

P(Long|Banana) = 0.8
P(Long|Orange) = 0  [Oranges are never long in all the fruit we have seen.]
....
P(Yellow|Other Fruit)     = 50/200 = 0.25
P(Not Yellow|Other Fruit) = 0.75

Given a Fruit, how to classify it?

Let's say that we are given the properties of an unknown fruit, and asked to classify it. We are told that the fruit is Long, Sweet and Yellow. Is it a Banana? Is it an Orange? Or Is it some Other Fruit?

We can simply run the numbers for each of the 3 outcomes, one by one. Then we choose the highest probability and 'classify' our unknown fruit as belonging to the class that had the highest probability based on our prior evidence (our 1000 fruit training set):

P(Banana|Long, Sweet and Yellow)
      P(Long|Banana) * P(Sweet|Banana) * P(Yellow|Banana) * P(Banana)
    = _______________________________________________________________
                      P(Long) * P(Sweet) * P(Yellow)

    = 0.8 * 0.7 * 0.9 * 0.5 / P(evidence)

    = 0.252 / P(evidence)


P(Orange|Long, Sweet and Yellow) = 0


P(Other Fruit|Long, Sweet and Yellow)
      P(Long|Other fruit) * P(Sweet|Other fruit) * P(Yellow|Other fruit) * P(Other Fruit)
    = ____________________________________________________________________________________
                                          P(evidence)

    = (100/200 * 150/200 * 50/200 * 200/1000) / P(evidence)

    = 0.01875 / P(evidence)

By an overwhelming margin (0.252 >> 0.01875), we classify this Sweet/Long/Yellow fruit as likely to be a Banana.
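Here is a minimal Python sketch of the same calculation, using the counts from the training table above (the table and numbers come from this answer; the function and variable names are just mine, for illustration):

    # Counts from the 1000-fruit training table: (Long, Sweet, Yellow, Total) per class.
    counts = {
        "Banana":      (400, 350, 450, 500),
        "Orange":      (  0, 150, 300, 300),
        "Other Fruit": (100, 150,  50, 200),
    }
    total_fruit = 1000

    def score(fruit):
        """Numerator of Bayes' rule: likelihood of each piece of evidence times the prior."""
        long_, sweet, yellow, total = counts[fruit]
        prior = total / total_fruit
        return (long_ / total) * (sweet / total) * (yellow / total) * prior

    scores = {fruit: score(fruit) for fruit in counts}
    for fruit, s in scores.items():
        print(fruit, round(s, 5))        # Banana 0.252, Orange 0.0, Other Fruit 0.01875

    print(max(scores, key=scores.get))   # Banana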

Why is Bayes Classifier so popular?

Look at what it eventually comes down to. Just some counting and multiplication. We can pre-compute all these terms, and so classifying becomes easy, quick and efficient.

Let z = 1 / P(evidence). Now we quickly compute the following three quantities.

P(Banana|evidence) = z * Prob(Banana) * Prob(Evidence1|Banana) * Prob(Evidence2|Banana) ...
P(Orange|Evidence) = z * Prob(Orange) * Prob(Evidence1|Orange) * Prob(Evidence2|Orange) ...
P(Other|Evidence)  = z * Prob(Other)  * Prob(Evidence1|Other)  * Prob(Evidence2|Other)  ...

Assign the class label of whichever is the highest number, and you are done.

Despite the name, Naive Bayes turns out to be excellent in certain applications. Text classification is one area where it really shines.
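For instance, here is a minimal sketch of text classification with scikit-learn's MultinomialNB (the four toy documents and labels below are made up purely for illustration):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # A tiny made-up training set: each document gets a class label.
    docs   = ["free money offer now", "meeting agenda for tomorrow",
              "win a free prize now", "project status and agenda"]
    labels = ["spam", "ham", "spam", "ham"]

    # Turn the text into word-count features, then train the classifier.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)
    clf = MultinomialNB()
    clf.fit(X, labels)

    # Classify a new, unseen document.
    new_doc = vectorizer.transform(["free prize meeting"])
    print(clf.predict(new_doc))    # e.g. ['spam']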

Hope that helps in understanding the concepts behind the Naive Bayes algorithm.

answered Sep 28 '22 by Ram Narasimhan


As I understand it, your question has two parts: part one is that you need a better understanding of the Naive Bayes classifier, and part two is the confusion surrounding the training set.

In general, all machine learning algorithms need to be trained for supervised learning tasks like classification and prediction, or for unsupervised learning tasks like clustering.

During the training step, the algorithm is taught with a particular input dataset (the training set) so that later we can test it on unknown inputs it has never seen before, which it then classifies or predicts (in the case of supervised learning) based on what it has learned. This is the foundation of most machine learning techniques, such as neural networks, SVMs, and Bayesian methods.

So in a typical machine learning project, you divide your input set into a Development Set (Training Set + Dev-Test Set) and a Test Set (or Evaluation Set). Remember, your basic objective is that your system learns to classify new inputs it has never seen before in either the dev set or the test set.

The test set typically has the same format as the training set. However, it is very important that the test set be distinct from the training corpus: if we simply reused the training set as the test set, then a model that simply memorized its input, without learning how to generalize to new examples, would receive misleadingly high scores.

As an example, 70% of the data might be used as training set cases. Also remember to partition the original set into training and test sets randomly.
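A minimal sketch of such a random 70/30 split in plain Python (the tiny dataset below is just a placeholder; in practice you would split your own labeled examples):

    import random

    # Placeholder dataset: (features, label) pairs.
    data = [([1.0, 0.2], "A"), ([0.3, 0.8], "B"), ([0.9, 0.1], "A"),
            ([0.2, 0.7], "B"), ([0.8, 0.3], "A"), ([0.1, 0.9], "B")]

    random.shuffle(data)              # randomize before splitting
    split = int(0.7 * len(data))      # 70% for training
    train_set, test_set = data[:split], data[split:]

    print(len(train_set), "training cases,", len(test_set), "test cases")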

Now I come to your other question about Naive Bayes.

To demonstrate the concept of Naïve Bayes Classification, consider the example given below:

[Figure: the existing objects, each either GREEN or RED, scattered on a plane.]

As indicated, the objects can be classified as either GREEN or RED. Our task is to classify new cases as they arrive, i.e., decide to which class label they belong, based on the currently existing objects.

Since there are twice as many GREEN objects as RED, it is reasonable to believe that a new case (which hasn't been observed yet) is twice as likely to have membership GREEN rather than RED. In the Bayesian analysis, this belief is known as the prior probability. Prior probabilities are based on previous experience, in this case the percentage of GREEN and RED objects, and often used to predict outcomes before they actually happen.

Thus, we can write:

Prior Probability of GREEN: number of GREEN objects / total number of objects

Prior Probability of RED: number of RED objects / total number of objects

Since there is a total of 60 objects, 40 of which are GREEN and 20 RED, our prior probabilities for class membership are:

Prior Probability for GREEN: 40 / 60

Prior Probability for RED: 20 / 60

Having formulated our prior probability, we are now ready to classify a new object (the WHITE circle in the diagram below). Since the objects are well clustered, it is reasonable to assume that the more GREEN (or RED) objects there are in the vicinity of X, the more likely it is that the new case belongs to that particular color. To measure this likelihood, we draw a circle around X which encompasses a number (to be chosen a priori) of points, irrespective of their class labels. Then we calculate the number of points in the circle belonging to each class label. From this we calculate the likelihood:

[Figure: the new object X (the WHITE circle) with a circle drawn around it, enclosing 1 GREEN object and 3 RED objects.]

    Likelihood of X given GREEN = Number of GREEN in the vicinity of X / Total number of GREEN objects
    Likelihood of X given RED   = Number of RED in the vicinity of X / Total number of RED objects

From the illustration above, it is clear that Likelihood of X given GREEN is smaller than Likelihood of X given RED, since the circle encompasses 1 GREEN object and 3 RED ones. Thus:

    Likelihood of X given GREEN = 1/40
    Likelihood of X given RED   = 3/20

Although the prior probabilities indicate that X may belong to GREEN (given that there are twice as many GREEN compared to RED) the likelihood indicates otherwise; that the class membership of X is RED (given that there are more RED objects in the vicinity of X than GREEN). In the Bayesian analysis, the final classification is produced by combining both sources of information, i.e., the prior and the likelihood, to form a posterior probability using the so-called Bayes' rule (named after Rev. Thomas Bayes 1702-1761).

    Posterior probability of X being GREEN = Prior probability of GREEN * Likelihood of X given GREEN
                                           = 4/6 * 1/40 = 1/60

    Posterior probability of X being RED   = Prior probability of RED * Likelihood of X given RED
                                           = 2/6 * 3/20 = 1/20

Finally, we classify X as RED since its class membership achieves the largest posterior probability.
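Here is a minimal Python sketch of this GREEN/RED calculation, using the counts given in this example (the variable names are my own):

    # Counts from the example above.
    total_green, total_red = 40, 20
    total_objects = total_green + total_red

    # Objects inside the circle drawn around the new point X.
    green_near_x, red_near_x = 1, 3

    # Priors and likelihoods.
    prior_green, prior_red = total_green / total_objects, total_red / total_objects
    like_green, like_red   = green_near_x / total_green, red_near_x / total_red

    # Posteriors (up to the common normalizing constant).
    post_green = prior_green * like_green    # (40/60) * (1/40) = 1/60
    post_red   = prior_red   * like_red      # (20/60) * (3/20) = 1/20

    print("GREEN" if post_green > post_red else "RED")    # RED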

answered Sep 28 '22 by Yavar