I have built a Sequential model with a custom F1-score metric. I pass the metric when compiling the model and then save the model in *.hdf5 format. Whenever I load the model for testing using the custom_objects argument
model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1})
Keras throws the following error as soon as the model is evaluated:
TypeError: '<' not supported between instances of 'function' and 'str'
Note: if I don't include the f1 metric during compilation, no error is raised and testing works fine.
Train method
from metrics import f1
...
# GRU with glove embeddings and two dense layers
model = Sequential()
model.add(Embedding(len(word_index) + 1,
                    100,
                    weights=[embedding_matrix],
                    input_length=max_len,
                    trainable=False))
model.add(SpatialDropout1D(0.3))
model.add(GRU(100, dropout=0.3, recurrent_dropout=0.3, return_sequences=True))
model.add(GRU(100, dropout=0.3, recurrent_dropout=0.3))
model.add(Dense(1024, activation='relu'))
#model.add(Dropout(0.8))
model.add(Dense(1024, activation='relu'))
#model.add(Dropout(0.8))
model.add(Dense(2))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', f1])
# Fit the model with early stopping callback
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=0, mode='auto')
model.fit(xtrain_pad, y=ytrain_enc, batch_size=512, epochs=100,
          verbose=1, validation_data=(xvalid_pad, yvalid_enc), callbacks=[earlystop])
model.save('app/model/test_model.hdf5')
Test method
from metrics import f1
...
model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1})
print(model.summary())
model.evaluate(xtest_pad, ytest_enc) # <-- error happens
Custom f1 metric
from keras import backend as K
def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.

        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.

        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision

    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))
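As a quick sanity check of the metric itself, outside any model, it can be evaluated on a pair of toy tensors. A minimal sketch, with made-up values:

import numpy as np
from keras import backend as K

# Toy batch: two one-hot labels and softmax-like predictions.
y_true = K.constant(np.array([[0., 1.], [1., 0.]]))
y_pred = K.constant(np.array([[0.1, 0.9], [0.8, 0.2]]))

# Both rounded predictions match the labels, so precision and recall
# (and therefore F1) come out at ~1.0.
print(K.eval(f1(y_true, y_pred)))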
EDIT
The preprocessed test data used for evaluating the model:
import pandas as pd
from sklearn import preprocessing
from keras.preprocessing import text, sequence
from keras.utils import np_utils

# Load the preprocessed test set and one-hot encode the labels
normalized_dataset = pd.read_pickle(DATA['preprocessed_test_path'])
lbl_enc = preprocessing.LabelEncoder()
y = lbl_enc.fit_transform(normalized_dataset.label.values)
xtest = normalized_dataset.preprocessed_tweets.values
ytest_enc = np_utils.to_categorical(y)

# Tokenize the test tweets and pad them to the training sequence length
token = text.Tokenizer(num_words=None)
max_len = 70
token.fit_on_texts(list(xtest))
xtest_seq = token.texts_to_sequences(xtest)
xtest_pad = sequence.pad_sequences(xtest_seq, maxlen=max_len)
EDIT2
Here is the full traceback of the stated error:
Traceback (most recent call last):
  File "app/main.py", line 67, in <module>
    main()
  File "app/main.py", line 64, in main
    test(embedding_dict)
  File "/Users/justauser/Desktop/sentiment-analysis/app/test.py", line 50, in test
    model.evaluate(xtest_pad, ytest_enc)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1389, in evaluate
    tmp_logs = self.test_function(iterator)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 725, in _initialize
    self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3196, in _create_graph_function
    func_graph_module.func_graph_from_py_func(
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
    out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1233 test_function  *
        return step_function(self, iterator)
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1224 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1219 run_step  **
        with ops.control_dependencies(_minimum_control_deps(outputs)):
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:2793 _minimum_control_deps
        outputs = nest.flatten(outputs, expand_composites=True)
    /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/util/nest.py:341 flatten
        return _pywrap_utils.Flatten(structure, expand_composites)

    TypeError: '<' not supported between instances of 'function' and 'str'
The Python "TypeError: '<' not supported between instances of 'str' and 'int'" occurs when we use a comparison operator between values of type str and int . To solve the error, convert the string to an integer before comparing, e.g. int(my_str) < my_int .
The Python "TypeError: '>' not supported between instances of 'method' and 'int'" occurs when we use a comparison operator between a method and an integer. To solve the error, make sure to call the method with parenthesis, e.g. my_method() . Here is an example of how the error occurs. Copied!
To solve the error, you have to figure out where the None value comes from and correct the assignment or conditionally check if the variable doesn't store None . The most common sources of None values are: Having a function that doesn't return anything (returns None implicitly). Explicitly setting a variable to None .
After load_model(), if you compile the model again with the custom metric, it should work.
Therefore, after loading your model from disk using
model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1})
make sure to compile it again with the metrics of interest:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', f1])
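With the model recompiled this way, model.evaluate(xtest_pad, ytest_enc) runs without raising the TypeError.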
As @Zaccharie Ramzi pointed out, the accepted answer is not suitable if you want to load the model to resume training, because the compile step resets the optimizer state (this is fine if you only want to evaluate or test the model). To resume training, you can work around the issue by recompiling the loaded model with its own loss and optimizer:
model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1})
model.compile(loss=model.loss, optimizer=model.optimizer, metrics=['acc', f1])
For reference, see the GitHub issue where this solution was first posted.
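A related variant, if you only need to evaluate, is to skip the automatic compile step entirely by passing compile=False to load_model and compiling once yourself. This is a minimal sketch; like the recompile above, it discards the saved optimizer state, so it is not suited to resuming training:

from tensorflow.keras.models import load_model
from metrics import f1

# Load only the architecture and weights; since no compile step runs,
# custom_objects should not be needed here, as the only custom object
# is the metric.
model = load_model('app/model/test_model.hdf5', compile=False)

# Compile once with the custom metric, then evaluate as usual.
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['acc', f1])
model.evaluate(xtest_pad, ytest_enc)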