
What is the difference between the roc_auc scorers in sklearn?

I am using sklearn on my binary classification dataset with 10-fold cross-validation, as follows. I use "accuracy", "precision_weighted", "recall_weighted", and "f1_weighted" to measure my results.

scores = cross_validate(clf, X, y, cv=k_fold, scoring=("accuracy", "precision_weighted", "recall_weighted", "f1_weighted"))
print(scores)

I want to get the roc_auc results as well. When I check sklearn they provide the following options.

‘roc_auc’
‘roc_auc_ovr’
‘roc_auc_ovo’
‘roc_auc_ovr_weighted’
‘roc_auc_ovo_weighted’

Since I am using weighted measures for precision, recall, and F-measure, I am thinking that I should use either roc_auc_ovr_weighted or roc_auc_ovo_weighted. However, I am not clear on the difference between them. Please suggest one that suits binary classification problems.

I am happy to provide more details if needed.

asked Oct 30 '25 by EmJ

2 Answers

From the documentation, roc_auc is the one suited to binary classification. The ovr (one-vs-rest) and ovo (one-vs-one) variants, weighted or not, are used for multi-class problems.
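The question's clf, X, y, and k_fold are not shown, so this is a minimal sketch assuming a synthetic binary dataset and a LogisticRegression classifier; it adds plain "roc_auc" to the existing scoring tuple:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Stand-in for the question's (unshown) data and classifier.
X, y = make_classification(n_samples=200, n_classes=2, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Same scorers as in the question, plus "roc_auc" for the binary case.
scores = cross_validate(
    clf, X, y, cv=10,
    scoring=("accuracy", "precision_weighted", "recall_weighted",
             "f1_weighted", "roc_auc"),
)

# Each scorer appears in the result dict under a "test_<name>" key.
print(scores["test_roc_auc"].mean())
```

Note that roc_auc needs probability estimates (or decision scores), so the estimator must support predict_proba or decision_function, as LogisticRegression does.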

answered Nov 01 '25 by Robert King


You should use 'balanced_accuracy' as well, to be consistent with "precision_weighted", "recall_weighted", and "f1_weighted".

answered Nov 01 '25 by Alexandre Kolisnyk


