In Keras, test sample evaluation is done like this:
score = model.evaluate(testx, testy, verbose=1)
This does not return the predicted values. There is a predict method
which returns the predicted values:
model.predict(testx, verbose=1)
returns
[
[.57 .21 .21]
[.19 .15 .64]
[.23 .16 .60]
.....
]
testy is one-hot encoded and its values look like this:
[
[1 0 0]
[0 0 1]
[0 0 1]
]
How can I make the predicted values look like testy, or how can I convert the predicted values to one-hot encoding?
Note: my model looks like this:
# setup the model, add layers
model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(classes, activation='softmax'))
# compile model
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
# fit the model
model.fit(trainx, trainy, batch_size=batch_size, epochs=iterations, verbose=1, validation_data=(testx, testy))
The values being returned are the probabilities of each class. Those values can be useful because they indicate the model's level of confidence.
If you are only interested in the class with the highest probability:
For example, [.19 .15 .64] → 2
(because index 2 holds the largest value in the list)
Let the model do it
Keras Sequential models have a built-in method, predict_classes, that returns the index of the highest class probability:
model.predict_classes(testx, verbose=1)
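Under the hood, predict_classes is just an argmax over the probabilities returned by predict. A minimal sketch of that equivalence with NumPy, where probs stands in for the output of model.predict(testx):

```python
import numpy as np

# Stand-in for the probabilities returned by model.predict(testx)
probs = np.array([[0.57, 0.21, 0.21],
                  [0.19, 0.15, 0.64],
                  [0.23, 0.16, 0.60]])

# What predict_classes returns: the index of the highest probability per row
classes = np.argmax(probs, axis=1)
print(classes)  # [0 2 2]
```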
Do it manually
argmax is a generic function that returns the index of the highest value in a sequence.
import tensorflow as tf
# Create a session
sess = tf.InteractiveSession()
# Output Values
output = [[.57, .21, .21], [.19, .15, .64], [.23, .16, .60]]
# Index of top values
indexes = tf.argmax(output, axis=1)
print(indexes.eval()) # prints [0 2 2]
Keras returns a np.ndarray with the normalized likelihood of the class labels.
So, if you want to transform this into a one-hot encoding, you first need to find the index of the maximum likelihood per row, which can be done with np.argmax
along axis=1. Then the np.eye
function can be used: indexing the identity matrix with those indices places a 1 at the predicted class in each row. The only care to be taken is to size the identity matrix to the number of classes.
a  # taken from your snippet
Out[327]:
array([[ 0.57,  0.21,  0.21],
       [ 0.19,  0.15,  0.64],
       [ 0.23,  0.16,  0.6 ]])
b  # one-hot encoding for this array
Out[330]:
array([[1, 0, 0],
       [0, 0, 1],
       [0, 0, 1]])
n_values = 3; c = np.eye(n_values, dtype=int)[np.argmax(a, axis=1)]
c  # generated one-hot encoding from the array of floats; also works on non-square matrices
Out[332]:
array([[1, 0, 0],
       [0, 0, 1],
       [0, 0, 1]])