As a school assignment I'm required to implement the Naïve Bayes algorithm, which I intend to do in Java.
In trying to understand how it's done, I've read the book "Data Mining: Practical Machine Learning Tools and Techniques", which has a section on this topic, but I'm still unsure about some primary points that are blocking my progress.
Since I'm seeking guidance rather than a solution here, I'll tell you what I'm thinking, what I believe the correct approach is, and in return ask for correction/guidance, which will be very much appreciated. Please note that I am an absolute beginner in the Naïve Bayes algorithm, data mining, and programming in general, so you might see stupid comments/calculations below:
The training data set I'm given has 4 attributes/features that are numeric and normalized to the range [0, 1] using Weka (no missing values), and one nominal class (yes/no).
1) The data coming from the CSV file is numeric, HENCE I separate the rows into two arrays by class (array class yes and array class no).
2) I calculate the mean (sum of the values in a column / number of values in that column) and the standard deviation for each of the 4 attributes (columns) of each class.
3) For a new instance, I compute the Gaussian PDF value of each attribute value x using that class's mean and SD: f(x) = (1 / (SD * sqrt(2 * pi))) * e^(-(x - mean)^2 / (2 * SD^2)).
4) Then, to find P(yes | E) and P(no | E), I multiply the PDF values of all 4 given attributes and compare which is larger, which indicates the class the instance belongs to.
In terms of Java, I'm using an ArrayList of ArrayLists of Double (ArrayList<ArrayList<Double>>) to store the attribute values.
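The per-class mean, standard deviation, and Gaussian PDF described above could be sketched in Java roughly like this (a minimal sketch; the class and method names are my own, not from the assignment):

```java
import java.util.List;

public class GaussianStats {

    // Mean of one attribute (column) across all rows of a class.
    static double meanOf(List<List<Double>> rows, int col) {
        double sum = 0.0;
        for (List<Double> row : rows) {
            sum += row.get(col);
        }
        return sum / rows.size();
    }

    // Sample standard deviation of one attribute (column) of a class.
    static double stdDevOf(List<List<Double>> rows, int col) {
        double mean = meanOf(rows, col);
        double sumSq = 0.0;
        for (List<Double> row : rows) {
            double d = row.get(col) - mean;
            sumSq += d * d;
        }
        return Math.sqrt(sumSq / (rows.size() - 1));
    }

    // Gaussian probability density of value x given a mean and standard deviation.
    static double gaussianPdf(double x, double mean, double sd) {
        double exponent = -((x - mean) * (x - mean)) / (2 * sd * sd);
        return Math.exp(exponent) / (sd * Math.sqrt(2 * Math.PI));
    }
}
```

One design note: dividing by `rows.size() - 1` gives the sample standard deviation; dividing by `rows.size()` (the population version) also works as long as you are consistent.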
Lastly, I'm unsure how to get new data. Should I ask for an input file (like a CSV), or prompt for 4 values at the command line?
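Either option works. If you go the command-prompt route, reading four values might look like this sketch (the prompt text and method name are my own; `Double.parseDouble` on `next()` avoids locale issues that `nextDouble()` can have):

```java
import java.util.Scanner;

public class ReadInstance {
    // Reads 4 attribute values from a Scanner into an array.
    static double[] readFourValues(Scanner in) {
        double[] values = new double[4];
        for (int i = 0; i < 4; i++) {
            System.out.print("Attribute " + (i + 1) + ": ");
            values[i] = Double.parseDouble(in.next());
        }
        return values;
    }
}
```

You would call it with `readFourValues(new Scanner(System.in))`; the same method also works for testing, since a `Scanner` can wrap a plain `String`.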
I'll stop here for now (I do have more questions), but I'm worried this won't get any responses given how long it's become. I'll really appreciate those who give their time to read my problems and comment.
In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other, each one contributes independently to the probability that the fruit is an apple, which is what makes the classifier "naive".
In the Naive Bayes classifier, why do we have to normalize the probabilities after calculating the probabilities of each hypothesis? If you want the responsibility of each class for a single data point, you must normalize the scores so they sum to 1; otherwise they cannot be interpreted as probabilities.
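Concretely, for the two-class yes/no case, normalization just means dividing each unnormalized score by their sum (a minimal sketch; names are mine):

```java
public class NormalizeScores {
    // Turns two unnormalized class scores into probabilities that sum to 1.
    static double[] normalize(double scoreYes, double scoreNo) {
        double total = scoreYes + scoreNo;
        return new double[] { scoreYes / total, scoreNo / total };
    }
}
```

Note that normalization never changes which class wins the comparison; it only makes the two numbers interpretable as P(yes | E) and P(no | E).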
Before applying the Naive Bayes formula to a dataset, we need to do some precomputations. In particular, we need the class priors P(y), estimated as the fraction of training rows belonging to each class. For example, if 6 of 14 training rows have class NO, then P(NO) = 6/14.
What you are doing is almost correct.
"Then to find P(yes | E) and P(no | E) I multiply the PDF value of all 4 given attributes and compare which is larger, which indicates the class it belongs to"
Here, you forgot to multiply by the prior P(yes) or P(no). Remember the decision formula:
P(Yes | E) ∝ P(Attr_1 | Yes) * P(Attr_2 | Yes) * P(Attr_3 | Yes) * P(Attr_4 | Yes) * P(Yes)
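The decision formula above could be sketched as a score function like this (names are mine; the log version is a common variant that avoids numerical underflow when many small densities are multiplied):

```java
public class NbScore {
    // Unnormalized posterior: product of the per-attribute PDF values times the prior.
    static double score(double[] pdfValues, double prior) {
        double score = prior;
        for (double p : pdfValues) {
            score *= p;
        }
        return score;
    }

    // Log-space version: sum of logs, numerically safer when densities are tiny.
    static double logScore(double[] pdfValues, double prior) {
        double logScore = Math.log(prior);
        for (double p : pdfValues) {
            logScore += Math.log(p);
        }
        return logScore;
    }
}
```

You would compute the score once per class and predict the class with the larger value; since log is monotonic, comparing log-scores gives the same answer as comparing scores.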
For Naive Bayes (and any other supervised learning/classification algorithm), you need training data and testing data. You use the training data to train the model and make predictions on the testing data. You could simply use the training data as the testing data, or you could split the CSV file into two pieces, one for training and one for testing. You could also do cross-validation on the CSV file.
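A simple train/test split can be done by shuffling the rows and cutting the list at some fraction (a sketch under my own naming; the fixed seed just makes the split reproducible):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SplitData {
    // Shuffles the rows and splits them into a training part and a testing part.
    static List<List<List<Double>>> split(List<List<Double>> rows, double trainFraction, long seed) {
        List<List<Double>> copy = new ArrayList<>(rows);
        Collections.shuffle(copy, new Random(seed));
        int cut = (int) Math.round(copy.size() * trainFraction);
        List<List<List<Double>>> parts = new ArrayList<>();
        parts.add(new ArrayList<>(copy.subList(0, cut)));           // training rows
        parts.add(new ArrayList<>(copy.subList(cut, copy.size()))); // testing rows
        return parts;
    }
}
```

A common choice is `trainFraction = 0.7`, i.e. 70% of the rows for training and 30% held out for testing.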