After fitting the model (which ran for a couple of hours), I wanted to get the accuracy of the trained model with the following code:
train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(nb_epoch)
but I got an error, which I assumed was caused by the deprecated methods I was using:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-233-081ed5e89aa4> in <module>()
3 train_loss=hist.history['loss']
4 val_loss=hist.history['val_loss']
----> 5 train_acc=hist.history['acc']
6 val_acc=hist.history['val_acc']
7 xc=range(nb_epoch)
KeyError: 'acc'
The code I used to fit the model before trying to read the accuracy is the following:
hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
                 verbose=1, validation_data=(X_test, Y_test))
hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
                 verbose=1, validation_split=0.2)
This produces the following output when run:
Epoch 1/20
237/237 [==============================] - 104s 440ms/step - loss: 6.2802 - val_loss: 2.4209
.....
.....
.....
Epoch 19/20
189/189 [==============================] - 91s 480ms/step - loss: 0.0590 - val_loss: 0.2193
Epoch 20/20
189/189 [==============================] - 85s 451ms/step - loss: 0.0201 - val_loss: 0.2312
I've noticed that I was using deprecated methods and arguments.
So how can I read the accuracy and val_accuracy without having to fit again and wait another couple of hours? I tried replacing train_acc=hist.history['acc'] with train_acc=hist.history['accuracy'], but it didn't help.
You probably didn't add "acc" as a metric when compiling the model.
model.compile(optimizer=..., loss=..., metrics=['accuracy',...])
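To confirm what was actually recorded, you can inspect the keys of the history dictionary (a quick check; nothing beyond standard Keras is assumed here):
print(hist.history.keys())
# e.g. dict_keys(['loss', 'val_loss']) when no metrics were passed to compile()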
You can get the metrics and loss from any data without training again with:
model.evaluate(X, Y)
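For example, here is a minimal sketch, assuming model still holds the trained weights and X_test/Y_test are the arrays from the question; compiling again only resets the training configuration, not the learned weights, so no refitting is needed:
# re-compile the already-trained model so accuracy is tracked;
# substitute the optimizer and loss you originally trained with
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# evaluate on any data without training again
test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=0)
print('test loss:', test_loss, 'test accuracy:', test_acc)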
Add metrics=['accuracy'] when you compile the model, then simply get the accuracy of the last epoch:
hist.history.get('acc')[-1]
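For future runs, once the model is compiled with an accuracy metric, the history can be read and plotted the way the question intended. A minimal sketch (recent Keras versions store the keys 'accuracy'/'val_accuracy', older ones 'acc'/'val_acc', so the lookup below tries both):
import matplotlib.pyplot as plt

# fall back to the old key names if the new ones are absent
train_acc = hist.history.get('accuracy', hist.history.get('acc'))
val_acc = hist.history.get('val_accuracy', hist.history.get('val_acc'))

plt.plot(train_acc, label='train accuracy')
plt.plot(val_acc, label='val accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()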
What I would actually do is use GridSearchCV and then read the best_score_ attribute to get the best metrics.
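A rough sketch of that idea, assuming the scikeras wrapper (pip install scikeras), integer class labels in y_train, and a hypothetical build_model helper that stands in for your real architecture:
from sklearn.model_selection import GridSearchCV
from scikeras.wrappers import KerasClassifier
from tensorflow import keras

def build_model():
    # hypothetical helper: replace with your actual network
    m = keras.Sequential([
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(10, activation='softmax'),
    ])
    m.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    return m

clf = KerasClassifier(model=build_model, epochs=5, verbose=0)
grid = GridSearchCV(clf, param_grid={'batch_size': [16, 32]}, cv=3)
grid.fit(X_train, y_train)     # y_train: integer class labels, not one-hot
print(grid.best_score_)        # best mean cross-validated accuracy
Note that this retrains the model several times, so it answers "how do I pick the best run" rather than "how do I avoid refitting".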