Classification metrics can't handle a mix of binary and continuous targets [duplicate]

I am trying to train and test several scikit-learn models and print their accuracy. Only some of the models work; the others fail with

ValueError: Classification metrics can't handle a mix of binary and continuous targets. 

What does this error mean? How can I modify my code below to successfully evaluate the models that are failing?

import pandas as pd
import numpy as np

import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score

from sklearn import linear_model
from sklearn import svm

from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

from sklearn import preprocessing
from sklearn import utils

# Shuffle pandas rows randomly
from sklearn.utils import shuffle

# Disable annoying warnings
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=UserWarning)

# Performance
import time

# Import the data and assign the column names
colNames = [str(colName) for colName in range(3780)]
colNames.append('class')
df = pd.read_csv("HoGTestData.csv", names=colNames)

# Randomly shuffle rows
df = shuffle(df)
df = df.head(20)  # optionally keep only the first rows for a quick test run

# Print some info on the dataset
print("Head of Data:")
print(df.head())
print("Shape of Data:")
print(df.shape)

# descriptions
print("Describe Data:")
#print(df.describe())

# class distribution
print(df.groupby('class').size())

# Convert the DataFrame to a raw NumPy array
datasetData = df.values

# The last column holds the class label; everything before it is feature data
shape = datasetData.shape
thresh = shape[1] - 1

# Extract feature vectors and labels (last column, as a 1-D array)
featureVectors = datasetData[:, 0:thresh]
labels = datasetData[:, thresh]

# Perform a standard scaler on the data
scaler = preprocessing.StandardScaler()
featureVectors = scaler.fit_transform(featureVectors)

# Encode the labels as integer classes
labelEncoder = preprocessing.LabelEncoder()
labels = labelEncoder.fit_transform(labels)

# Split data into training and testing data
test_size = 0.20
seed = 7
featureVectorTrain, featureVectorTest, labelsTrain, labelsTest = model_selection.train_test_split(featureVectors, labels, test_size=test_size, random_state=seed)

# Spot Check Algorithms
models = []
models.append(('SVM', svm.SVC()))
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
#models.append(('SGDRegressor', linear_model.SGDRegressor())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets
#models.append(('BayesianRidge', linear_model.BayesianRidge())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets
#models.append(('LassoLars', linear_model.LassoLars())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets
#models.append(('ARDRegression', linear_model.ARDRegression())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets
#models.append(('PassiveAggressiveRegressor', linear_model.PassiveAggressiveRegressor())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets
#models.append(('TheilSenRegressor', linear_model.TheilSenRegressor())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets
#models.append(('LinearRegression', linear_model.LinearRegression())) #ValueError: Classification metrics can't handle a mix of binary and continuous targets

# Test options and evaluation metric
seed = 42
scoring = 'accuracy'

# evaluate each model in turn
results = []
names = []
print("---------------------------------------")
for name, model in models:
    start_time = time.time()
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    cv_results = model_selection.cross_val_score(model, featureVectorTrain, labelsTrain, cv=kfold, scoring=scoring)
    elapsed_time = time.time() - start_time
    results.append(cv_results)
    names.append(name)
    msg = "{:3.2f} ({:3.2f})  Time elapsed: {:6.2f}".format(cv_results.mean(), cv_results.std(), elapsed_time)
    msg = "%s "%(name) + msg
    print(msg)
print("---------------------------------------")

print("Done")

Here is the script output:

Head of Data:
              0         1         2  ...        3778      3779  class
20573  0.124282  0.090376  0.088723  ...    0.148411  0.120542     -1
20461  0.154031  0.110177  0.087799  ...    0.100416  0.119484     -1
10416  0.340767  0.150863  0.025489  ...    0.047592  0.036171      1
52404  0.000000  0.000000  0.000000  ...    0.000000  0.000000     -1
42785  0.159105  0.118963  0.090405  ...    0.009996  0.027460     -1

[5 rows x 3781 columns]
Shape of Data:
(1024, 3781)
Describe Data:
class
-1    794
 1    230
dtype: int64
---------------------------------------
SVM 0.9878 (0.0123)  Time elapsed:  10.20
LR 0.9414 (0.0187)  Time elapsed:   7.09
LDA 0.9768 (0.0128)  Time elapsed:   6.60
KNN 0.8511 (0.0384)  Time elapsed:   3.06
CART 0.9047 (0.0358)  Time elapsed:   8.84
NB 0.9292 (0.0209)  Time elapsed:   0.36
---------------------------------------
Done

Here is the labelsTrain variable:

print(labelsTrain)
[0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 1 0 1 1 1 0 1 1 0 0
 1 1 0 1 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
 0 0 1 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 1 0 0 0
 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 1 1 1 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1
 1 0 1 1 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 1
 0 1 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 0
 0 0 0 0 1 1 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 1 0 1 0 1 1 0 1 0 0 0 0 0 0 1 0 1
 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
 0 1 0 0 0 1 0 1 0 0 1 0 1 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0
 0 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0
 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0
 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0
 1 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0
 1 0 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 1
 0 0 0 0 1 0 1 0 0 1 1 1 1 0 1 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0
 0 0 1 1 0 0 1 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 1 0
 0 0 1 0 0]

The error is raised inside the cross_val_score call:

# evaluate each model in turn
results = []
names = []
print("---------------------------------------")
for name, model in models:
    start_time = time.time()
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    print("start cross_val_score")
    cv_results = model_selection.cross_val_score(model, featureVectorTrain, labelsTrain, cv=kfold, scoring=scoring)
    print("done cross_val_score")
    elapsed_time = time.time() - start_time
    results.append(cv_results)
    #print(results)
    names.append(name)
    msg = "{:3.4f} ({:3.4f})  Time elapsed: {:6.2f}".format(cv_results.mean(), cv_results.std(), elapsed_time)
    msg = "%s "%(name) + msg
    print(msg)
print("---------------------------------------")

... 

---------------------------------------
start cross_val_score
done cross_val_score
SVM 0.9744 (0.0127)  Time elapsed:  10.46
start cross_val_score
done cross_val_score
LR 0.9194 (0.0390)  Time elapsed:   9.56
start cross_val_score
done cross_val_score
LDA 0.9780 (0.0106)  Time elapsed:   8.04
start cross_val_score
done cross_val_score
KNN 0.8657 (0.0319)  Time elapsed:   3.20
start cross_val_score
done cross_val_score
CART 0.9072 (0.0326)  Time elapsed:  10.20
start cross_val_score
done cross_val_score
NB 0.9182 (0.0327)  Time elapsed:   0.38
start cross_val_score
Traceback (most recent call last):
  File "/Users/me/Desktop/MachineLearning/Initial.py", line 112, in <module>
    cv_results = model_selection.cross_val_score(model, featureVectorTrain, labelsTrain, cv=kfold, scoring=scoring)
  File "/usr/local/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 402, in cross_val_score
    error_score=error_score)
  File "/usr/local/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 240, in cross_validate
    for train, test in cv.split(X, y, groups))
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/parallel.py", line 917, in __call__
    if self.dispatch_one_batch(iterator):
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/parallel.py", line 759, in dispatch_one_batch
    self._dispatch(tasks)
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/parallel.py", line 716, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 182, in apply_async
    result = ImmediateResult(func)
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 549, in __init__
    self.results = batch()
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/parallel.py", line 225, in __call__
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.7/site-packages/sklearn/externals/joblib/parallel.py", line 225, in <listcomp>
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 568, in _fit_and_score
    test_scores = _score(estimator, X_test, y_test, scorer, is_multimetric)
  File "/usr/local/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 605, in _score
    return _multimetric_score(estimator, X_test, y_test, scorer)
  File "/usr/local/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 635, in _multimetric_score
    score = scorer(estimator, X_test, y_test)
  File "/usr/local/lib/python3.7/site-packages/sklearn/metrics/scorer.py", line 98, in __call__
    **self._kwargs)
  File "/usr/local/lib/python3.7/site-packages/sklearn/metrics/classification.py", line 176, in accuracy_score
    y_type, y_true, y_pred = _check_targets(y_true, y_pred)
  File "/usr/local/lib/python3.7/site-packages/sklearn/metrics/classification.py", line 81, in _check_targets
    "and {1} targets".format(type_true, type_pred))
ValueError: Classification metrics can't handle a mix of binary and continuous targets
asked Jan 30 '19 by PhilBot
2 Answers

I was stacking a few models with vecstack, set needs_proba=True, and then got this error. I solved it by changing the metric passed to the stacking function: stacking uses class predictions by default, so if you want probabilities you must also supply a metric that can handle them. I defined a new function to use as the metric:

import numpy as np
from sklearn.metrics import precision_recall_curve

def get_classification_metric(testy, probs):
    # Precision/recall pairs across all probability thresholds
    precision, recall, thresholds = precision_recall_curve(testy, probs[:, 1])
    # Convert each pair to an F1 score
    fscore = (2 * precision * recall) / (precision + recall)
    # Locate and return the largest F1 score
    ix = np.argmax(fscore)
    return fscore[ix]

This function finds the highest F1 score over all thresholds, so you only need to pass metric=get_classification_metric to the stacking function.

answered Nov 16 '22 by RyanAbnavi

All your commented-out models are regression models, not classifiers, and accuracy is meaningless for regression.

You get the error because these regression models do not produce binary outcomes but continuous (float) numbers, as all regression models do; so, when scikit-learn attempts to compute the accuracy by comparing a binary number (the true label) with a float (the predicted value), it understandably raises an error. This cause is hinted at clearly in the error message itself:

Classification metrics can't handle a mix of binary and continuous targets
answered Nov 16 '22 by desertnaut