I am using custom metrics when training my Keras model. It works fine, except that the metric names in the output of model.fit_generator(...) are not interpretable (NB: TensorBoard also uses these uninformative names).
Here is a reproducible example of what I am doing: the metrics take one extra parameter (in addition to the prediction and ground truth), so I defined a factory that generates a parameterless metric function, similar to this:
import keras

def my_dummy_metric(y_true, y_pred, the_param=1.0):
    return the_param * keras.backend.ones((1,))

def my_metric_factory(the_param=1.0):
    def fn(y_true, y_pred):
        return my_dummy_metric(y_true, y_pred, the_param=the_param)
    return fn

my_second_metric = my_metric_factory(2.0)
my_other_metric = my_metric_factory(3.14)
Then I compile and train my model:
model.compile(my_optim, my_loss, [my_second_metric, my_other_metric])
history = model.fit_generator(...)
print(history.params['metrics'])
My trouble is that the metric names in history are fn, fn_1, val_fn and val_fn_1. These names are also used by TensorBoard, and you need to know the implementation details to understand them.
By contrast, I don't have this problem when using a plain custom metric function, without the factory:
model.compile(my_optim, my_loss, [my_dummy_metric])
history = model.fit_generator(...)
print(history.params['metrics'])
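In this case the reported names are derived from the metric function's own name. Assuming the same model setup as above, trained with validation data, the printed list should look something like this (a sketch of the expected output, not a verified log):

# names come from my_dummy_metric.__name__
# ['loss', 'my_dummy_metric', 'val_loss', 'val_my_dummy_metric']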
Would it be possible to obtain my_XXX_metric as the output names in the factory-based use case as well?
Environment: Keras 2.2.4, TF 1.14.0, Python 3.7
Yes, this is possible. In the metric factory, just set an appropriate __name__ on the generated metric function. For example:
def my_metric_factory(the_param=1.0):
    def fn(y_true, y_pred):
        return my_dummy_metric(y_true, y_pred, the_param=the_param)
    fn.__name__ = 'metricname_{}'.format(the_param)
    return fn
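With this change, recompiling should surface the custom names in both history and TensorBoard. A minimal sketch, reusing the my_optim, my_loss and fit_generator placeholders from the question:

my_second_metric = my_metric_factory(2.0)   # __name__ == 'metricname_2.0'
my_other_metric = my_metric_factory(3.14)   # __name__ == 'metricname_3.14'

model.compile(my_optim, my_loss, [my_second_metric, my_other_metric])
history = model.fit_generator(...)

# Expected names (with validation data), instead of fn / fn_1:
# ['loss', 'metricname_2.0', 'metricname_3.14',
#  'val_loss', 'val_metricname_2.0', 'val_metricname_3.14']
print(history.params['metrics'])

Because the parameter value is embedded in __name__, each factory-built instance gets a distinguishable label; any name format works here, so you can pick whatever reads best in TensorBoard.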