 

Difference between keras.metrics.Accuracy() and "accuracy"

I have been testing different approaches to building neural network models (TensorFlow, Keras) and noticed something strange with the metric passed when compiling the model.

I checked two ways:

    model.compile(
        loss=keras.losses.CategoricalCrossentropy(),
        optimizer=keras.optimizers.Adam(),
        metrics=keras.metrics.Accuracy()
    )

and

    model.compile(
        loss=keras.losses.CategoricalCrossentropy(),
        optimizer=keras.optimizers.Adam(),
        metrics=["accuracy"]
    )

Result of first approach:

    Epoch 1/2
    1875/1875 - 2s - loss: 0.0494 - accuracy: 0.0020
    Epoch 2/2
    1875/1875 - 2s - loss: 0.0401 - accuracy: 0.0030

    <tensorflow.python.keras.callbacks.History at 0x7f9c00bc06d8>

Result of second approach:

    Epoch 1/2
    1875/1875 - 2s - loss: 0.0368 - accuracy: 0.9884
    Epoch 2/2
    1875/1875 - 2s - loss: 0.0303 - accuracy: 0.9913

    <tensorflow.python.keras.callbacks.History at 0x7f9bfd7d35c0>

This is quite strange; I thought that "accuracy" was exactly the same as keras.metrics.Accuracy(). At least that is the case for the "loss" and "optimizer" arguments, e.g. "adam" is the same as keras.optimizers.Adam(). Does anybody know why this behaves so oddly, or have I missed something?

Edit:

The approach with the metric wrapped in a list gives strange results too:

    model.compile(
        loss=keras.losses.CategoricalCrossentropy(),
        optimizer=keras.optimizers.Adam(),
        metrics=[keras.metrics.Accuracy()]
    )

    Epoch 1/2
    1875/1875 - 2s - loss: 0.2996 - accuracy: 0.0000e+00
    Epoch 2/2
    1875/1875 - 2s - loss: 0.1431 - accuracy: 1.8333e-05

    <tensorflow.python.keras.callbacks.History at 0x7f9bfd1045f8>

asked by Pav3k


1 Answer

When you specify keras.metrics.Accuracy(), you are explicitly asking the library to calculate the Accuracy metric, which is a simple element-wise comparison of how many target values exactly match the predicted values.
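To see what that means in isolation, here is a minimal sketch (standalone, outside of model.compile) of how Accuracy() counts exact matches:

    import tensorflow as tf

    # Accuracy() counts exact element-wise matches between labels and predictions.
    m = tf.keras.metrics.Accuracy()
    m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
    print(m.result().numpy())  # 0.75 -- 3 of the 4 values match exactly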

However, when you pass the string "accuracy", a different metric gets selected depending on the type of loss you have chosen. This is what the Keras documentation says:

When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.

Hence, since CategoricalCrossentropy is the loss, CategoricalAccuracy gets calculated in case 2. It takes the argmax of the predictions and compares it against the one-hot encoded targets. As a result, you see sensible accuracy values in case 2 and near-zero values in case 1.
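The following sketch (assuming one-hot targets and softmax-like predictions) reproduces the effect: CategoricalAccuracy compares class indices after argmax, while Accuracy compares the raw float values element-wise and almost never finds an exact match:

    import tensorflow as tf

    y_true = [[0, 0, 1], [0, 1, 0]]              # one-hot targets
    y_pred = [[0.1, 0.2, 0.7], [0.05, 0.95, 0]]  # softmax-like outputs

    # CategoricalAccuracy: argmax(y_pred) vs argmax(y_true) -> both rows match.
    cat_acc = tf.keras.metrics.CategoricalAccuracy()
    cat_acc.update_state(y_true, y_pred)
    print(cat_acc.result().numpy())  # 1.0

    # Accuracy: raw element-wise equality -> only the single 0 == 0 entry matches.
    plain_acc = tf.keras.metrics.Accuracy()
    plain_acc.update_state(y_true, y_pred)
    print(plain_acc.result().numpy())  # ~0.17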

So the string "accuracy" will not always mean the metric class Accuracy().
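If you do want to pass a metric object explicitly together with a categorical crossentropy loss, the matching choice would be CategoricalAccuracy, e.g. (a sketch of the fix, mirroring the compile call from the question):

    model.compile(
        loss=keras.losses.CategoricalCrossentropy(),
        optimizer=keras.optimizers.Adam(),
        metrics=[keras.metrics.CategoricalAccuracy()]
    )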

For reference, the explanation of the different accuracy metrics: https://keras.io/api/metrics/accuracy_metrics/

For reference, the documentation of the metrics argument of compile: https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile

answered by ranka47


