 

How to extract False Positive, False Negative from a confusion matrix of multiclass classification

I am classifying MNIST data using the following Keras code. From the confusion_matrix command of sklearn.metrics I got the confusion matrix, and with TruePositive = sum(np.diag(cm1)) I am able to get the True Positives. But I am confused about how to get the True Negatives, False Positives, and False Negatives. I read a solution here, but the user comments confused me. Please help me with code to get these parameters.

from sklearn.metrics import confusion_matrix
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import numpy as np
(x_train, y_train), (x_test, y_test) = mnist.load_data()
batch_size = 128
num_classes = 10
epochs = 1
img_rows, img_cols = 28, 28
y_test1=y_test

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
#model.add(GlobalAveragePooling2D())
#model.add(GlobalMaxPooling2D())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,  # categorical (not binary) cross-entropy for the 10-class softmax output
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])



model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

pre_cls=model.predict_classes(x_test)

cm1 = confusion_matrix(y_test1,pre_cls)
print('Confusion Matrix : \n', cm1)

TruePositive = sum(np.diag(cm1))  # sums the diagonal, i.e. the total number of correct predictions over all classes

asked Dec 20 '17 by Hitesh


People also ask

How do you get a false positive rate from confusion matrix?

False positive rate (FPR) is calculated as the number of incorrect positive predictions divided by the total number of negatives. The best false positive rate is 0.0 whereas the worst is 1.0. It can also be calculated as 1 – specificity.
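For example, a minimal sketch with made-up counts for a single class:

fp, tn = 8, 92                 # hypothetical false-positive and true-negative counts
fpr = fp / (fp + tn)           # 8 / 100 = 0.08
specificity = tn / (tn + fp)   # 0.92; 1 - specificity gives the same 0.08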

How do you interpret confusion matrix for multiclass classification?

A confusion matrix is used to evaluate the performance of a machine-learning classifier. It is represented in matrix form and gives a comparison between actual and predicted values. For a multiclass problem it is an N x N matrix, where N is the number of classes or outputs.
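For example, a small 3-class sketch with made-up labels, where rows correspond to actual classes and columns to predicted classes:

from sklearn.metrics import confusion_matrix
y_true = [0, 0, 1, 1, 2, 2]   # hypothetical actual labels
y_pred = [0, 1, 1, 1, 2, 0]   # hypothetical predicted labels
print(confusion_matrix(y_true, y_pred))
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]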

What is false negative in multiclass classification?

A false positive is an outcome where the model incorrectly predicts the positive class, and a false negative is an outcome where the model incorrectly predicts the negative class. In the multiclass setting, each class is treated in turn as the positive class, with all remaining classes counted as negative.

What is false positive and false negative in confusion matrix?

False positives (FP): we predicted yes, but they don't actually have the disease (also known as a "Type I error").
False negatives (FN): we predicted no, but they actually do have the disease (also known as a "Type II error").
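In the binary case these four counts can be read straight from sklearn's 2 x 2 confusion matrix; a small sketch with made-up labels (ravel() flattens the matrix in the order tn, fp, fn, tp):

from sklearn.metrics import confusion_matrix
y_true = [1, 0, 1, 1, 0, 0]   # hypothetical: 1 = has the disease
y_pred = [1, 0, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)          # 2 1 1 2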


1 Answer

First of all, you have omissions in your code - in order to run, I needed to add the following commands:

import keras
(x_train, y_train), (x_test, y_test) = mnist.load_data()

Having done that, and given the confusion matrix cm1:

array([[ 965,    0,    1,    0,    0,    2,    6,    1,    5,    0],
       [   0, 1113,    4,    2,    0,    0,    3,    0,   13,    0],
       [   8,    0,  963,   14,    5,    1,    7,    8,   21,    5],
       [   0,    0,    3,  978,    0,    7,    0,    6,   12,    4],
       [   1,    0,    4,    0,  922,    0,    9,    3,    3,   40],
       [   4,    1,    1,   27,    0,  824,    6,    1,   20,    8],
       [  11,    3,    1,    1,    5,    6,  925,    0,    6,    0],
       [   2,    6,   17,    8,    2,    0,    1,  961,    2,   29],
       [   5,    1,    2,   13,    4,    6,    2,    6,  929,    6],
       [   6,    5,    0,    7,    5,    6,    1,    6,   10,  963]])

here is how you can get the requested TP, FP, FN, TN per class:

The True Positives are simply the diagonal elements:

TruePositive = np.diag(cm1)
TruePositive
# array([ 965, 1113,  963,  978,  922,  824,  925,  961,  929,  963])

The False Positives are the sum of the respective column, minus the diagonal element:

FalsePositive = []
for i in range(num_classes):
    FalsePositive.append(sum(cm1[:,i]) - cm1[i,i])
FalsePositive
# [37, 16, 33, 72, 21, 28, 35, 31, 92, 92]
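The same thing can be done without an explicit loop; a one-line NumPy equivalent (assuming cm1 is the NumPy array returned by confusion_matrix):

FalsePositive = cm1.sum(axis=0) - np.diag(cm1)   # column sums minus the diagonal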

Similarly, the False Negatives are the sum of the respective row, minus the diagonal element:

FalseNegative = []
for i in range(num_classes):
    FalseNegative.append(sum(cm1[i,:]) - cm1[i,i])
FalseNegative
# [15, 22, 69, 32, 60, 68, 33, 67, 45, 46]
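Again, a loop-free NumPy equivalent:

FalseNegative = cm1.sum(axis=1) - np.diag(cm1)   # row sums minus the diagonal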

Now, the True Negatives are a little trickier; let's first think about what exactly a True Negative means with respect to, say, class 0: it means all the samples that have been correctly identified as not being 0. So, essentially, what we should do is remove the corresponding row and column from the confusion matrix, and then sum up all the remaining elements:

TrueNegative = []
for i in range(num_classes):
    temp = np.delete(cm1, i, 0)   # delete ith row
    temp = np.delete(temp, i, 1)  # delete ith column
    TrueNegative.append(sum(sum(temp)))
TrueNegative
# [8998, 8871, 9004, 8950, 9057, 9148, 9040, 9008, 8979, 8945]
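Alternatively, since for each class the TP, FP, FN, and TN counts partition the whole test set, the True Negatives can also be obtained without deleting rows and columns; a self-contained NumPy sketch equivalent to the loop above:

TP = np.diag(cm1)
FP = cm1.sum(axis=0) - TP
FN = cm1.sum(axis=1) - TP
TrueNegative = cm1.sum() - (TP + FP + FN)   # one entry per class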

Let's do a sanity check: for each class, the sum of TP, FP, FN, and TN must equal the size of our test set (here 10,000). Let's confirm that this is indeed the case:

l = len(y_test)
for i in range(num_classes):
    print(TruePositive[i] + FalsePositive[i] + FalseNegative[i] + TrueNegative[i] == l)

The result is

True
True
True
True
True
True
True
True
True
True
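As a follow-up, once these four per-class counts are available, the per-class rates mentioned in the questions above (recall, precision, and the false positive rate, i.e. 1 - specificity) follow directly; a short sketch reusing the quantities computed above:

TP = np.array(TruePositive)
FP = np.array(FalsePositive)
FN = np.array(FalseNegative)
TN = np.array(TrueNegative)

recall    = TP / (TP + FN)   # true positive rate (sensitivity)
precision = TP / (TP + FP)
fpr       = FP / (FP + TN)   # false positive rate = 1 - specificity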

answered Sep 28 '22 by desertnaut