
How to interpret Weka classification?

How can we interpret the classification result in Weka using naive Bayes?

How are the mean, standard deviation, weight sum, and precision calculated?

How are the kappa statistic, mean absolute error, root mean squared error, etc. calculated?

What is the interpretation of the confusion matrix?

asked May 25 '10 by user349821



2 Answers

Below is some sample output for a naive Bayes classifier, using 10-fold cross-validation. There's a lot of information there, and what you should focus on depends on your application. I'll explain some of the results below, to get you started.

=== Stratified cross-validation ===
=== Summary ===

Correctly Classified Instances          71               71      %
Incorrectly Classified Instances        29               29      %
Kappa statistic                          0.3108
Mean absolute error                      0.3333
Root mean squared error                  0.4662
Relative absolute error                 69.9453 %
Root relative squared error             95.5466 %
Total Number of Instances              100

=== Detailed Accuracy By Class ===

               TP Rate   FP Rate   Precision   Recall  F-Measure   ROC Area  Class
                 0.967     0.692      0.686     0.967     0.803      0.709    0
                 0.308     0.033      0.857     0.308     0.453      0.708    1
Weighted Avg.    0.71      0.435      0.753     0.71      0.666      0.709

=== Confusion Matrix ===

  a  b   <-- classified as
 59  2 |  a = 0
 27 12 |  b = 1

The correctly and incorrectly classified instances show the percentage of test instances that were correctly and incorrectly classified. The raw counts are shown in the confusion matrix, with a and b representing the class labels. Here there were 100 instances, so the percentages and raw counts add up: the diagonal entries are the correct classifications, 59 + 12 = 71, and the off-diagonal entries are the misclassifications, 2 + 27 = 29.
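
As a quick illustration of how the summary numbers follow from the confusion matrix, here is a minimal Python sketch (the variable names are just for illustration):

# Confusion matrix from the output above: rows are actual classes, columns are predictions.
#            classified as a   classified as b
# actual a          59                2
# actual b          27               12
confusion = [[59, 2],
             [27, 12]]

correct   = sum(confusion[i][i] for i in range(2))   # diagonal: 59 + 12 = 71
total     = sum(sum(row) for row in confusion)       # 100 instances
incorrect = total - correct                          # 2 + 27 = 29

print(correct, incorrect, correct / total)           # 71 29 0.71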

The percentage of correctly classified instances is often called accuracy or sample accuracy. It has some disadvantages as a performance estimate (not chance corrected, not sensitive to class distribution), so you'll probably want to look at some of the other numbers. ROC Area, or area under the ROC curve, is my preferred measure.

Kappa is a chance-corrected measure of agreement between the classifications and the true classes. It's calculated by taking the agreement expected by chance away from the observed agreement and dividing by the maximum possible agreement. A value greater than 0 means that your classifier is doing better than chance (it really should be!).
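
To make that concrete, here is a minimal Python sketch (variable names are illustrative) that reproduces the kappa value of 0.3108 from the confusion matrix above:

confusion = [[59, 2],
             [27, 12]]
total = sum(sum(row) for row in confusion)                     # 100

# Observed agreement: proportion of instances on the diagonal.
p_observed = sum(confusion[i][i] for i in range(2)) / total    # 0.71

# Agreement expected by chance: for each class, (actual proportion) * (predicted proportion).
row_totals = [sum(row) for row in confusion]                             # actual counts: [61, 39]
col_totals = [sum(confusion[i][j] for i in range(2)) for j in range(2)]  # predicted counts: [86, 14]
p_chance = sum(row_totals[k] * col_totals[k] for k in range(2)) / total ** 2  # 0.5792

kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 4))                                         # 0.3108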

The error rates are used for numeric prediction rather than classification. In numeric prediction, predictions aren't just right or wrong; an error has a magnitude, and these measures reflect that.
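
For reference, mean absolute error and root mean squared error are defined as follows; the sketch below uses made-up actual and predicted values purely to illustrate the formulas:

import math

# Hypothetical numeric targets and predictions, purely for illustration.
actual    = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]

errors = [p - a for p, a in zip(predicted, actual)]

mae  = sum(abs(e) for e in errors) / len(errors)            # mean absolute error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root mean squared error

print(mae, rmse)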

Hopefully that will get you started.

answered Sep 21 '22 by michaeltwofish


To elaborate on michaeltwofish's answer, some notes on the remaining values:

  • TP Rate: rate of true positives (instances of a class correctly classified as that class)

  • FP Rate: rate of false positives (instances of other classes incorrectly classified as the given class)

  • Precision: the number of instances truly of a class divided by the total number of instances classified as that class

  • Recall: the number of instances correctly classified as a given class divided by the actual total in that class (equivalent to TP Rate)

  • F-Measure: a combined measure of precision and recall, calculated as 2 * Precision * Recall / (Precision + Recall); the sketch below shows how these follow from the confusion matrix
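
To tie these definitions back to the output above, here is a minimal Python sketch (the variable names are just for illustration) that reproduces the class-0 row of the detailed accuracy table from the confusion matrix:

confusion = [[59, 2],    # actual class 0: 59 classified as 0, 2 as 1
             [27, 12]]   # actual class 1: 27 classified as 0, 12 as 1

# Treat class 0 as the "positive" class.
tp = confusion[0][0]     # 59: class 0 correctly classified as 0
fn = confusion[0][1]     #  2: class 0 misclassified as 1
fp = confusion[1][0]     # 27: class 1 misclassified as 0
tn = confusion[1][1]     # 12: class 1 correctly classified as 1

tp_rate   = tp / (tp + fn)           # recall / TP Rate: 0.967
fp_rate   = fp / (fp + tn)           # 0.692
precision = tp / (tp + fp)           # 0.686
recall    = tp_rate
f_measure = 2 * precision * recall / (precision + recall)  # 0.803

print(round(tp_rate, 3), round(fp_rate, 3), round(precision, 3), round(f_measure, 3))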

As for the ROC area measurement, I agree with michaeltwofish that this is one of the most important values output by Weka. An "optimal" classifier will have ROC area values approaching 1, with 0.5 being comparable to "random guessing" (similar to a Kappa statistic of 0).
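
The ROC area can also be read as the probability that the classifier ranks a randomly chosen instance of the class above a randomly chosen instance of the other class(es). Below is a minimal sketch of that pairwise definition, using made-up scores and labels; Weka itself derives the value from the full ROC curve, so this is only an illustration of the concept:

# Hypothetical predicted probabilities for the positive class and true labels (1 = positive).
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0  ]

pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]

# AUC = probability that a random positive is ranked above a random negative (ties count 0.5).
pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)

print(auc)  # 1.0 means perfect ranking, 0.5 is random guessing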

It should be noted that the "balance" of the data set needs to be taken into account when interpreting results. Unbalanced data sets, in which a disproportionately large number of instances belong to a certain class, may lead to high accuracy rates even though the classifier may not actually be particularly good.
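
As a small, made-up illustration of that point: on a 95/5 class split, a degenerate classifier that always predicts the majority class reaches 95% accuracy, yet its kappa statistic is 0, i.e. no better than chance.

# Hypothetical imbalanced test set: 95 instances of class 0, 5 of class 1.
actual    = [0] * 95 + [1] * 5
predicted = [0] * 100          # a "classifier" that always predicts the majority class

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)  # 0.95

# Chance agreement is also 0.95 here, so kappa works out to 0.
p_observed = accuracy
p_chance = (95 / 100) * (100 / 100) + (5 / 100) * (0 / 100)              # 0.95
kappa = (p_observed - p_chance) / (1 - p_chance)

print(accuracy, kappa)  # 0.95 0.0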

Further reading:

  • https://www.cs.auckland.ac.nz/courses/compsci367s1c/tutorials/IntroductionToWeka.pdf
  • http://en.wikipedia.org/wiki/Receiver_operating_characteristic#Basic_concept
  • http://en.wikipedia.org/wiki/Information_retrieval#Performance_and_correctness_measures
answered Sep 18 '22 by Hybrid System