
confusion matrix error "Classification metrics can't handle a mix of multilabel-indicator and multiclass targets"

I am getting a

Classification metrics can't handle a mix of multilabel-indicator and multiclass targets

error when I try to use a confusion matrix.

I am doing my first deep learning project and I am new to it. I am using the MNIST dataset provided by Keras. I have trained and tested my model successfully.

However, when I try to use the scikit-learn confusion matrix, I get the error stated above. I have searched for an answer, and while there are answers about this error, none of them worked for me. From what I found online, it probably has something to do with the loss function (I use categorical_crossentropy in my code). I tried changing it to sparse_categorical_crossentropy, but that just gave me the

Error when checking target: expected dense_2 to have shape (1,) but got array with shape (10,)

when I run the fit() function on the model.

This is the code. (I have left out the imports for the sake of brevity)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(28 * 28,)))
model.add(Dense(10, activation='softmax')) 

model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model.fit(train_images, train_labels, epochs=10, batch_size=128)

rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0)

cm = confusion_matrix(test_labels, rounded_predictions)

How can I fix this?

asked Feb 08 '19 at 09:02 by Emmanuel


2 Answers

The confusion matrix needs both labels and predictions as single digits, not as one-hot encoded vectors. You have already done this for your predictions using model.predict_classes():

rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0)
rounded_predictions[1]
# 2
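
As a side note, more recent TensorFlow/Keras versions no longer provide Sequential.predict_classes(); a minimal equivalent sketch, assuming a softmax output like the one above, is to take the argmax of model.predict():

import numpy as np
# same result as predict_classes(): index of the highest softmax probability per sample
rounded_predictions = np.argmax(model.predict(test_images, batch_size=128, verbose=0), axis=-1)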

Your test_labels, however, are still one-hot encoded:

test_labels[1]
# array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)

So, you should convert them to single digits as well, as follows:

import numpy as np
rounded_labels=np.argmax(test_labels, axis=1)
rounded_labels[1]
# 2

After which, the confusion matrix should come up OK:

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(rounded_labels, rounded_predictions)
cm
# result:
array([[ 971,    0,    0,    2,    1,    0,    2,    1,    3,    0],
       [   0, 1121,    2,    1,    0,    1,    3,    0,    7,    0],
       [   5,    4,  990,    7,    5,    3,    2,    7,    9,    0],
       [   0,    0,    0,  992,    0,    2,    0,    7,    7,    2],
       [   2,    0,    2,    0,  956,    0,    3,    3,    2,   14],
       [   3,    0,    0,   10,    1,  872,    3,    0,    1,    2],
       [   5,    3,    1,    1,    9,   10,  926,    0,    3,    0],
       [   0,    7,   10,    1,    0,    2,    0,  997,    1,   10],
       [   5,    0,    3,    7,    5,    7,    3,    4,  937,    3],
       [   5,    5,    0,    9,   10,    3,    0,    8,    3,  966]])
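
A side note on the other error mentioned in the question: sparse_categorical_crossentropy expects the labels themselves as single-digit integers (shape (1,) per sample), so with that loss you would simply leave out the two to_categorical calls. A minimal sketch of that alternative, reusing the model and preprocessed images from the question:

# alternative: keep the integer labels and use the sparse loss
model.compile(optimizer='Adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# no to_categorical here - train_labels / test_labels stay as integers 0-9
model.fit(train_images, train_labels, epochs=10, batch_size=128)

rounded_predictions = model.predict_classes(test_images, batch_size=128, verbose=0)

# test_labels are already single digits, so no argmax is needed before confusion_matrix
cm = confusion_matrix(test_labels, rounded_predictions)
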
answered Sep 27 '22 at 22:09 by desertnaut


The same problem is repeated here, and the solution is essentially the same. Since that question has been closed and cannot receive answers, I would like to add an answer to this question instead.

The code below is self-explanatory. @desertnaut has already given the exact reasons, so there is no need to repeat them. The author of that question passed features extracted by a pre-trained model separately to the fit function, which I believe can give a newcomer a better understanding.

import numpy as np
import pandas as pd 
import tensorflow as tf 
from sklearn.model_selection import train_test_split
from tensorflow.keras.applications.resnet50 import ResNet50

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

x_train = np.expand_dims(x_train, axis=-1)                        # (60000, 28, 28) -> (60000, 28, 28, 1)
x_train = np.repeat(x_train, 3, axis=-1)                          # grayscale -> 3 channels for ResNet50
x_train = x_train.astype('float32') / 255                         # scale pixel values to [0, 1]
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)  # one-hot labels for categorical_crossentropy

print(x_train.shape, y_train.shape)
# (60000, 28, 28, 3) (60000, 10)

Extract features from pre-trained weights (Transfer Learning).

base_model = ResNet50(weights='imagenet', include_top=False)
pred_x_train = base_model.predict(x_train)
pred_x_train.shape
# (60000, 1, 1, 2048)

Reshape for further training process.

pred_x_train = pred_x_train.reshape(60000, 1*1*2048)
pred_x_train.shape
# (60000, 2048)

The model with sequential API.

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(512, activation='relu', input_shape=(2048,)))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

Compile and Run.

model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
model.fit(pred_x_train, y_train, epochs=2, verbose=2)

Epoch 1/2
1875/1875 - 4s - loss: 0.6993 - accuracy: 0.7744
Epoch 2/2
1875/1875 - 4s - loss: 0.4451 - accuracy: 0.8572

Evaluate.

from sklearn.metrics import classification_report

# predict 
pred = model.predict(pred_x_train, batch_size = 32)
pred = np.argmax(pred, axis=1)
# label
y_train = np.argmax(y_train, axis=1)

print(y_train.shape, pred.shape)
print(y_train[:5], pred[:5])
# (60000,) (60000,)
# [5 0 4 1 9] [5 0 4 1 9]
print(classification_report(y_train, pred))

              precision    recall  f1-score   support

           0       0.95      0.97      0.96      5923
           1       0.97      0.99      0.98      6742
           2       0.90      0.94      0.92      5958
           3       0.89      0.91      0.90      6131
           4       0.97      0.89      0.93      5842
           5       0.88      0.91      0.89      5421
           6       0.95      0.97      0.96      5918
           7       0.94      0.95      0.94      6265
           8       0.94      0.78      0.85      5851
           9       0.87      0.93      0.90      5949

    accuracy                           0.93     60000
   macro avg       0.93      0.92      0.92     60000
weighted avg       0.93      0.93      0.92     60000
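
Since the original question was about the confusion matrix, the same single-digit arrays can go straight into confusion_matrix. The report above is computed on the training features; as a small additional sketch (assuming the held-out test set gets the same preprocessing and feature extraction, which is not part of the run above):

from sklearn.metrics import confusion_matrix

# training-set confusion matrix from the single-digit arrays above
print(confusion_matrix(y_train, pred))

# sketch for the test set: repeat the same preprocessing and feature extraction
x_test = np.repeat(np.expand_dims(x_test, axis=-1), 3, axis=-1).astype('float32') / 255
pred_x_test = base_model.predict(x_test).reshape(-1, 2048)
test_pred = np.argmax(model.predict(pred_x_test), axis=1)
print(confusion_matrix(y_test, test_pred))  # y_test was never one-hot encoded, so no argmax needed
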
answered Sep 27 '22 at 22:09 by M.Innat