I'm using MNIST example with 60000 training image and 10000 testing image. How do I find which of the 10000 testing image that has an incorrect classification/prediction?
Evaluating a model can be streamlined with a couple of simple methods that yield statistics you can reference later. If you've ever read a research paper, you've seen metrics such as accuracy, weighted accuracy, recall (sensitivity), specificity, and precision.
predict_classes will return the index of the class with the maximum value. For example, if cat is 0.6 and dog is 0.4, it returns 0 if the class cat is at index 0.
Simply use model.predict_classes() and compare the output with the true labels, i.e.:

incorrects = np.nonzero(model.predict_classes(X_test).reshape((-1,)) != y_test)

to get the indices of incorrect predictions. Note that predict_classes was removed from Sequential models in newer TensorFlow/Keras releases; there you can use np.argmax(model.predict(X_test), axis=1) instead.
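Here is a minimal, self-contained sketch of the comparison step. The probs array is a hypothetical stand-in for what model.predict(X_test) would return (one row of class probabilities per test image); with a real model you would substitute that call.

```python
import numpy as np

# Hypothetical stand-in for model output; in practice:
#   probs = model.predict(X_test)   # shape (10000, 10) for MNIST
probs = np.array([
    [0.1, 0.8, 0.1],   # predicted class 1
    [0.7, 0.2, 0.1],   # predicted class 0
    [0.2, 0.3, 0.5],   # predicted class 2
])
y_test = np.array([1, 2, 2])       # true labels

# Index of the highest-probability class for each sample
pred_classes = np.argmax(probs, axis=1)

# Indices of the test samples the model got wrong
incorrects = np.nonzero(pred_classes != y_test)[0]
print(incorrects)  # -> [1]
```

You can then use these indices to inspect the misclassified images, e.g. X_test[incorrects].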