 

sklearn svm area under ROC less than 0.5 for training data

I am using the SVM in sklearn (v0.13.1) to try to solve a binary classification problem. I use k-fold cross-validation and compute the area under the ROC curve (roc_auc) to test the quality of my model. However, for some folds the roc_auc is less than 0.5, even on the training data. Shouldn't that be impossible? Shouldn't the algorithm always be able to reach at least 0.5 on the data it is being trained on?

Here's my code:

import numpy as np
from sklearn import svm, cross_validation
from sklearn.metrics import roc_curve, auc

# myData (features) and classVector (binary labels) are defined elsewhere
classifier = svm.SVC(kernel='poly', degree=3, probability=True, max_iter=100000)
kf = cross_validation.KFold(len(myData), n_folds=3, indices=False)
for train, test in kf:
    # Fit on this fold's training split
    Fit = classifier.fit(myData[train], classVector[train])

    # AUC on the held-out test split
    probas_ = Fit.predict_proba(myData[test])
    fpr, tpr, thresholds = roc_curve(classVector[test], probas_[:, 1])
    roc_auc = auc(fpr, tpr)

    # AUC on the training split itself
    probas_ = Fit.predict_proba(myData[train])
    fpr2, tpr2, thresholds2 = roc_curve(classVector[train], probas_[:, 1])
    roc_auc2 = auc(fpr2, tpr2)

    print "Training auc: ", roc_auc2, " Testing auc: ", roc_auc

The output looks like this:

    Training auc: 0.423920939062  Testing auc: 0.388436883629
    Training auc: 0.525472613736  Testing auc: 0.565581854043
    Training auc: 0.470917930528  Testing auc: 0.259344660194

Is a result with an area under the curve of less than 0.5 meaningful? In principle, if both the training and test values are below 0.5 I could just invert the prediction for every point, but I am worried that something is going wrong. I thought that even if I gave it completely random data, the algorithm should reach 0.5 on the training data?
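
A minimal sketch of that sanity check, assuming synthetic stand-ins X and y in place of myData and classVector: train the same kind of SVC on pure noise and rank the training points by decision_function, which in practice should give a training AUC at or above 0.5.

import numpy as np
from sklearn import svm
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
X = rng.rand(200, 5)           # random features carrying no signal
y = rng.randint(0, 2, 200)     # random binary labels

clf = svm.SVC(kernel='poly', degree=3).fit(X, y)
# On the data it was trained on, ranking by the decision function
# should hover at or above 0.5 even for pure noise
print(roc_auc_score(y, clf.decision_function(X).ravel()))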

asked Feb 05 '14 by user3276811

People also ask

What does an AUC of 0.5 mean?

This ROC curve has an AUC of 0.5, meaning it ranks a random positive example higher than a random negative example 50% of the time. As such, the corresponding classification model is basically worthless, as its predictive ability is no better than random guessing.
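
A minimal sketch of this, assuming roc_auc_score from sklearn.metrics and made-up random labels and scores: a scorer that carries no information lands near 0.5.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, 10000)  # random ground-truth labels
scores = rng.rand(10000)           # scores carrying no information

print(roc_auc_score(y_true, scores))  # close to 0.5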

How can I improve my AUC ROC score?

Improving AUC generally comes down to improving the overall performance of the classifier. Several measures can be tried, though which ones work depends on the problem and the data. One common first step is feature normalization and scaling, as sketched below.
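
A minimal sketch of that first step, assuming placeholder names X_train, y_train, and X_test: bundle scaling with the SVM in a Pipeline so the scaler is fit on training data only.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# SVMs are sensitive to feature scales, so standardizing features
# (zero mean, unit variance) is a cheap, often effective first step
model = Pipeline([
    ('scale', StandardScaler()),
    ('svc', SVC(kernel='poly', degree=3, probability=True)),
])
# model.fit(X_train, y_train)
# scores = model.predict_proba(X_test)[:, 1]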

What is a good ROC AUC score?

Area under the ROC curve (AUC) values are commonly considered excellent between 0.9 and 1.0, good between 0.8 and 0.9, fair between 0.7 and 0.8, poor between 0.6 and 0.7, and failed between 0.5 and 0.6.

How do you interpret ROC AUC scores?

AUC represents the probability that a random positive example is ranked above a random negative example. AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.


1 Answer

Indeed you could invert your predictions, and that is what AUROCs below 0.5 suggest doing. It is normally not a problem, as long as you are consistent and either always or never reverse them, on both the training and test sets.
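
As a sketch with made-up numbers: negating the scores turns an AUC of a into 1 - a.

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.9, 0.6, 0.8, 0.2, 0.7, 0.1])  # deliberately "backwards"

print(roc_auc_score(y_true, scores))   # 1/9, about 0.11
print(roc_auc_score(y_true, -scores))  # 8/9, about 0.89, i.e. 1 - 0.11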

The likely reason for this problem is that classifier.fit or roc_curve misinterpreted the classVector you passed. It is probably better to fix that than to invert predictions - read their docs to learn exactly what data they expect. In particular, you did not specify which label is positive: see the pos_label argument of roc_curve and make sure y_true is encoded accordingly.
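
A minimal sketch of pos_label, adapted from the roc_curve documentation example (the 1/2 labels are illustrative): the positive label must match the predict_proba column you pass in, and those columns follow Fit.classes_.

import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([1, 1, 2, 2])            # labels are 1/2, not the default 0/1
scores = np.array([0.1, 0.4, 0.35, 0.8])   # scores for the class labeled 2

# Without pos_label, roc_curve cannot know that "2" is the positive class
fpr, tpr, thresholds = roc_curve(y_true, scores, pos_label=2)
print(auc(fpr, tpr))  # 0.75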

What is worrisome, however, is that some of your AUROCs are above 0.5 on the training set while most are close to it. That probably means your classifier performs no better than random.

answered Oct 16 '22 by Calimo