 

How to use Tensorflow addons' metrics correctly in functional API?

I have an LSTM model that performs binary classification of human activities using multivariate smartphone sensor data. The two classes are imbalanced (1:50). Therefore I would like to use the F1-score as a metric, but I saw that it has been deprecated as a built-in Keras metric.

Previously, the best practice was to compute the metric in a callback so that it was applied to the whole dataset; recently, however, TensorFlow Addons reintroduced the F1-score.

I now have a problem applying this score to my functional API model. Here is the code I am currently running:

import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

def create_model(n_neurons=150, learning_rate=0.01, activation="relu", loss="binary_crossentropy"):

   #create input layer and assign to current output layer
   input_ = keras.layers.Input(shape=(X_train.shape[1],X_train.shape[2])) 

   #add LSTM layer
   lstm = keras.layers.LSTM(n_neurons, activation=activation)(input_)

   #Output Layer
   output = keras.layers.Dense(1, activation="sigmoid")(lstm)

   #Create Model
   model = keras.models.Model(inputs=[input_], outputs=[output])

   #Add optimizer
   optimizer=keras.optimizers.SGD(lr=learning_rate, clipvalue=0.5)

   #Compile model
   model.compile(loss=loss, optimizer=optimizer, metrics=[tfa.metrics.F1Score(num_classes=2, average="micro")])

   print(model.summary())

   return model

#Create the model
model = create_model()

#fit the model
history = model.fit(X_train,y_train, 
                epochs=300, 
                validation_data=(X_val, y_val))

If I use another value for the metric's average argument (e.g., average=None or average="macro"), then I get an error message when fitting the model:

ValueError: Dimension 0 in both shapes must be equal, but are 2 and 1. Shapes are [2] and [1]. for 'AssignAddVariableOp' (op: 'AssignAddVariableOp') with input shapes: [], [1].

And if I use average="micro", I do not get the error, but the F1-score stays at 0 throughout the learning process, while my loss decreases.

I believe I am still doing something wrong here. Can anybody provide an explanation for me?

asked Dec 27 '19 by JoeBe


1 Answer

Updated answer: The crucial bit is to import keras from tensorflow (i.e. tf.keras), not the standalone keras package. Then you can use e.g. tf.keras.metrics.Precision or tfa.metrics.F1Score without problems.
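
For example, a compile call could then look like the following. This is only a sketch: it assumes the single-sigmoid-output model from the question has already been built, the optimizer settings are copied from the question, and the metric names are just labels I chose.

import tensorflow as tf
from tensorflow import keras  #tf.keras, not the standalone keras package

model.compile(loss="binary_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=0.01, clipvalue=0.5),
              metrics=[tf.keras.metrics.Precision(name="precision"),
                       tf.keras.metrics.Recall(name="recall")])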

Old answer: The problem with tensorflow-addons is that the implementation in the current release (0.6.0) only counts exact matches, so that a comparison of e.g. 1 and 0.99 yields 0. Of course, this is practically useless in a neural network. This has been fixed in 0.7.0 (not yet released). You can install it as follows:

pip3 install --upgrade pip
pip3 install tfa-nightly

and then use a threshold (everything below the threshold is counted as 0, otherwise as 1):

tfa.metrics.FBetaScore(num_classes=2, average="micro", threshold=0.9)

See also https://github.com/tensorflow/addons/issues/490. The problem with other values for average is discussed here: https://github.com/tensorflow/addons/issues/746.

Beware that there are two other problems that will probably lead to useless results; see also https://github.com/tensorflow/addons/issues/818:

  1. The model uses binary classification, but the F1-score in tfa assumes categorical classification with one-hot encoding (a possible workaround is sketched after this list).
  2. The F1-score is computed at each batch step during validation, rather than over the whole validation set.
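
If you nevertheless want to keep the tfa metric, one way to work around the first point is to recast the model as a two-class categorical one, i.e. two softmax output units with one-hot encoded labels. The following is only a sketch under that assumption; input_ and lstm are the layers from the question's create_model, and X_train, y_train, X_val, y_val are the arrays from the question with 0/1 labels.

from tensorflow import keras
import tensorflow_addons as tfa

#two softmax output units instead of one sigmoid unit
output = keras.layers.Dense(2, activation="softmax")(lstm)
model = keras.models.Model(inputs=[input_], outputs=[output])

#categorical loss to match the two-unit output
model.compile(loss="categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=0.01, clipvalue=0.5),
              metrics=[tfa.metrics.F1Score(num_classes=2, average="macro")])

#one-hot encode the 0/1 labels so their shape matches the output
y_train_oh = keras.utils.to_categorical(y_train, num_classes=2)
y_val_oh = keras.utils.to_categorical(y_val, num_classes=2)

history = model.fit(X_train, y_train_oh,
                    epochs=300,
                    validation_data=(X_val, y_val_oh))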

These problems should not appear when using the Keras metrics.

answered Sep 22 '22 by tillmo