I am trying to add the r squared in the eval_metric_ops in my estimator like this:
def model_fn(features, labels, mode, params):
    predict = prediction(features, params, mode)
    loss = my_loss_fn
    eval_metric_ops = {
        'rsquared': tf.subtract(1.0,
                                tf.div(tf.reduce_sum(tf.squared_difference(labels, predict)),
                                       tf.reduce_sum(tf.squared_difference(labels, tf.reduce_mean(labels)))),
                                name='rsquared')
    }
    train_op = tf.contrib.layers.optimize_loss(
        loss=loss,
        global_step=global_step,
        learning_rate=0.1,
        optimizer="Adam"
    )
    predictions = {"predictions": predict}
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops
    )
but I have the following error:
TypeError: Values of eval_metric_ops must be (metric_value, update_op) tuples, given: Tensor("rsquared:0", shape=(), dtype=float32) for key: rsquared
I also tried without the name argument, but that does not change anything. Do you know how to create this eval_metric_ops?
eval_metric_ops needs a dict keyed by metric name whose values are (metric_value, update_op) tuples, which is exactly what the functions in tf.metrics return. Your code passes a plain Tensor instead, which is why you get the TypeError. The metric in your case can be implemented using tf.metrics:
def metric_fn(labels, predict):
    # SST: streaming mean squared deviation of the labels from their mean
    SST, update_op1 = tf.metrics.mean_squared_error(labels, tf.reduce_mean(labels))
    # SSE: streaming mean squared error between labels and predictions
    SSE, update_op2 = tf.metrics.mean_squared_error(labels, predict)
    # R^2 = 1 - SSE / SST; group both update ops so each streaming mean advances
    return tf.subtract(1.0, tf.div(SSE, SST)), tf.group(update_op1, update_op2)
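The streaming means converge to the batch statistics, so the R² formula itself (1 - SSE/SST) can be sanity-checked with plain Python on some hypothetical values, no TensorFlow needed:

```python
# Illustrative values, not from the question.
labels = [1.0, 2.0, 3.0, 4.0]
preds = [1.1, 1.9, 3.2, 3.9]

mean_y = sum(labels) / len(labels)
# SST: mean squared deviation of the labels from their mean
sst = sum((y - mean_y) ** 2 for y in labels) / len(labels)
# SSE: mean squared error between labels and predictions
sse = sum((y - p) ** 2 for y, p in zip(labels, preds)) / len(labels)

r_squared = 1.0 - sse / sst
print(round(r_squared, 3))  # 0.986
```

Note that dividing both sums by the same count cancels in the ratio, which is why the two mean_squared_error metrics above yield the same R² as the raw sums of squares.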