Help--100% accuracy with LibSVM?

Nominally a good problem to have, but I'm pretty sure it is because something funny is going on...

As context, I'm working on a problem in the facial expression/recognition space, so getting 100% accuracy seems incredibly implausible (not that it would be plausible in most applications...). I'm guessing there is either some consistent bias in the data set that is making it overly easy for an SVM to pull out the answer, =or=, more likely, I've done something wrong on the SVM side.

I'm looking for suggestions to help understand what is going on--is it me (i.e., my usage of LibSVM)? Or is it the data?

The details:

  • ~2500 labeled data vectors/instances (transformed video frames of <20 individuals total), binary classification problem. ~900 features/instance. Unbalanced data set at about a 1:4 ratio.
  • Ran subset.py to separate the data into test (500 instances) and train (remaining).
  • Ran "svm-train -t 0 ". (Note: apparently no need for '-w1 1 -w-1 4'...)
  • Ran svm-predict on the test file. Accuracy=100%!

Things tried:

  • Checked about 10 times over that I'm not training & testing on the same data files, through some inadvertent command-line argument error
  • re-ran subset.py (even with -s 1) multiple times and did train/test on multiple different data sets (in case I randomly hit upon the most magical train/test partition)
  • ran a simple diff-like check to confirm that the test file is not a subset of the training data
  • svm-scale on the data has no effect on accuracy (accuracy=100%). (Although the number of support vectors does drop from nSV=127, nBSV=64 to nSV=72, nBSV=0.)
  • ((weird)) using the default RBF kernel (vice linear -- i.e., removing '-t 0') results in accuracy going to garbage(?!)
  • (sanity check) running svm-predict using a model trained on a scaled data set against an unscaled data set results in accuracy = 80% (i.e., it always guesses the dominant class). This is strictly a sanity check to make sure that svm-predict is nominally acting right on my machine.
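One more check worth automating: a diff-like comparison only catches the test file being a literal subset of the training file, not individual leaked instances. A minimal sketch (file names in the usage comment are placeholders) that parses libsvm-format lines and reports exact feature-vector collisions between the two sets:

```python
def parse_libsvm_line(line):
    """Parse one libsvm-format line into (label, features-as-frozenset)."""
    parts = line.split()
    label = parts[0]
    # Each feature is "index:value"; a frozenset makes the vector hashable.
    features = frozenset(tuple(p.split(":")) for p in parts[1:])
    return label, features

def overlapping_vectors(train_lines, test_lines):
    """Return test-set lines whose feature vector also appears in training."""
    train_vectors = {parse_libsvm_line(l)[1] for l in train_lines if l.strip()}
    return [l for l in test_lines
            if l.strip() and parse_libsvm_line(l)[1] in train_vectors]

# Usage (hypothetical file names):
# with open("train.txt") as f: train_lines = f.readlines()
# with open("test.txt") as f: test_lines = f.readlines()
# print(len(overlapping_vectors(train_lines, test_lines)), "leaked instances")
```

Note this still misses near-duplicates (e.g. adjacent video frames that differ by a pixel), which can leak just as badly.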

Tentative conclusion?:

Something with the data is wacked--somehow, within the data set, there is a subtle, experimenter-driven effect that the SVM is picking up on.

(This doesn't, on first pass, explain why the RBF kernel gives garbage results, however.)

Would greatly appreciate any suggestions on a) how to fix my usage of LibSVM (if that is actually the problem) or b) determine what subtle experimenter-bias in the data LibSVM is picking up on.
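For (b): since the linear kernel works suspiciously well, one way to hunt for the experimenter-driven effect is to reconstruct the primal weight vector w = sum_i (alpha_i * y_i) * x_i from the support vectors and see which features dominate; a feature tied to recording conditions rather than expression would stand out. A sketch with toy numbers (the real coefficients and support vectors would come from the libsvm model file):

```python
def primal_weights(sv_coef, support_vectors, n_features):
    """Reconstruct w = sum_i (alpha_i * y_i) * x_i for a linear-kernel SVM.
    sv_coef[i] is alpha_i*y_i (as a libsvm model stores it); support_vectors[i]
    is a sparse dict {feature_index: value}."""
    w = [0.0] * n_features
    for coef, sv in zip(sv_coef, support_vectors):
        for idx, val in sv.items():
            w[idx] += coef * val
    return w

def top_features(w, k=3):
    """Indices of the k largest-magnitude weights."""
    return sorted(range(len(w)), key=lambda i: abs(w[i]), reverse=True)[:k]

# Toy example: feature 2 carries all the signal.
coefs = [1.0, -1.0]
svs = [{0: 0.1, 2: 1.0}, {0: 0.1, 2: -1.0}]
w = primal_weights(coefs, svs, 3)
```

If one or two of the ~900 features carry nearly all the weight, inspect how those features were generated.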

asked Aug 23 '11 by severian



1 Answer

Two other ideas:

Make sure you're not training and testing on the same data. This sounds kind of dumb, but in computer vision applications you should take care: make sure you're not repeating data (say, two frames of the same video falling into different folds), and that you're not training and testing on the same individual, etc. It is more subtle than it sounds.
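Concretely, with <20 individuals the split should be by person, not by frame, so every frame of a given subject lands entirely in train or entirely in test. A minimal sketch in pure Python, assuming each instance is tagged with a subject id (subset.py does not do this for you):

```python
import random

def split_by_subject(instances, subject_ids, test_fraction=0.2, seed=0):
    """Put every frame of a subject entirely in train or entirely in test,
    so the classifier can never just memorize an individual's appearance."""
    subjects = sorted(set(subject_ids))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train, test = [], []
    for inst, sid in zip(instances, subject_ids):
        (test if sid in test_subjects else train).append(inst)
    return train, test
```

If accuracy drops sharply under a subject-wise split, identity leakage (not expression) was driving the 100%.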

Make sure you search for the gamma and C parameters of the RBF kernel. There are good theoretical (asymptotic) results showing that a linear classifier is just a degenerate RBF classifier, so with a properly tuned (C, gamma) pair the RBF kernel should do at least as well as linear. Garbage RBF results usually just mean the defaults are far from the right region.

answered Oct 25 '22 by carlosdc