Tensorflow Keras RMSE metric returns different results than my own built RMSE loss function

This is a regression problem

My custom RMSE loss:

import tensorflow as tf

def root_mean_squared_error_loss(y_true, y_pred):
    return tf.keras.backend.sqrt(tf.keras.losses.MSE(y_true, y_pred))

Training code sample, where create_model returns a dense, fully connected sequential model:

from tensorflow.keras.metrics import RootMeanSquaredError
model = create_model()
model.compile(loss=root_mean_squared_error_loss, optimizer='adam', metrics=[RootMeanSquaredError()])

model.fit(train_.values,
          targets,
          epochs=100,
          validation_split=0.1,
          verbose=1,
          batch_size=32)
Train on 3478 samples, validate on 387 samples
Epoch 1/100
3478/3478 [==============================] - 2s 544us/sample - loss: 1.1983 - root_mean_squared_error: 0.7294 - val_loss: 0.7372 - val_root_mean_squared_error: 0.1274
Epoch 2/100
3478/3478 [==============================] - 1s 199us/sample - loss: 0.8371 - root_mean_squared_error: 0.3337 - val_loss: 0.7090 - val_root_mean_squared_error: 0.1288
Epoch 3/100
3478/3478 [==============================] - 1s 187us/sample - loss: 0.7336 - root_mean_squared_error: 0.2468 - val_loss: 0.6366 - val_root_mean_squared_error: 0.1062
Epoch 4/100
3478/3478 [==============================] - 1s 187us/sample - loss: 0.6668 - root_mean_squared_error: 0.2177 - val_loss: 0.5823 - val_root_mean_squared_error: 0.0818

I expected both loss and root_mean_squared_error to have the same values; why is there a difference?

asked May 31 '20 by ma7555

1 Answer

Two key differences, per the source code:

  1. RMSE is a stateful metric (it keeps a running memory across batches); your loss is stateless.
  2. The square root is applied after a global mean over all elements, not after the axis=-1 (per-sample) mean that tf.keras.losses.MSE computes.
    • As a result of 1, point 2 is more involved: the metric takes the mean of a running quantity, total, with respect to another running quantity, count; both quantities are reset via RMSE.reset_states().
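Point 2 can be checked in isolation; a quick sketch with random data (shapes here are arbitrary):

```python
import numpy as np
import tensorflow as tf

rng = np.random.RandomState(0)
y_true = tf.constant(rng.randn(4, 3))
y_pred = tf.constant(rng.randn(4, 3))

# tf.keras.losses.MSE reduces over axis=-1 only, yielding one value per sample;
# sqrt'ing those and averaging gives a "mean of per-sample RMSEs", not the RMSE.
mean_of_sqrts = tf.reduce_mean(tf.sqrt(tf.keras.losses.MSE(y_true, y_pred)))

# True RMSE: one global mean over all elements, then a single sqrt.
global_rmse = tf.sqrt(tf.reduce_mean(tf.math.squared_difference(y_true, y_pred)))

print(mean_of_sqrts.numpy())
print(global_rmse.numpy())
```

Since sqrt is concave, the mean of square roots is at most the square root of the mean, so the two quantities agree only when every per-sample MSE is identical.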

The raw formula fix is easy, but replicating the statefulness requires more work and is beyond the scope of this question; refer to the source code to see how it's done. A fix for 2, with a comparison, is below.


import numpy as np
import tensorflow as tf
from tensorflow.keras.metrics import RootMeanSquaredError as RMSE

def root_mean_squared_error_loss(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.math.squared_difference(y_true, y_pred)))

np.random.seed(0)

#%%###########################################################################
rmse = RMSE(dtype='float64')
rmsel = root_mean_squared_error_loss

x1 = np.random.randn(32, 10)
y1 = np.random.randn(32, 10)
x2 = np.random.randn(32, 10)
y2 = np.random.randn(32, 10)

#%%###########################################################################
print("TensorFlow RMSE:")
print(rmse(x1, y1))
print(rmse(x2, y2))
print("=" * 46)
print(rmse(x1, y1))
print(rmse(x2, y2))

print("\nMy RMSE:")
print(rmsel(x1, y1))
print(rmsel(x2, y2))
TensorFlow RMSE:
tf.Tensor(1.4132492562096124, shape=(), dtype=float64)
tf.Tensor(1.3875944990740972, shape=(), dtype=float64)
==============================================
tf.Tensor(1.3961984634354354, shape=(), dtype=float64)  # same inputs, different result
tf.Tensor(1.3875944990740972, shape=(), dtype=float64)  # same inputs, different result

My RMSE:
tf.Tensor(1.4132492562096124, shape=(), dtype=float64)  # first result agrees
tf.Tensor(1.3614563994283353, shape=(), dtype=float64)  # second differs since stateless
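For reference, the running-total behavior can be mimicked with a minimal sketch (a simplified illustration, not the actual Keras implementation):

```python
import numpy as np

class RunningRMSE:
    """Minimal stateful RMSE sketch: accumulates a running sum of squared
    errors (total) and a running element count (count), then reports
    sqrt(total / count) -- the pattern RootMeanSquaredError uses internally."""
    def __init__(self):
        self.reset_states()

    def reset_states(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, y_true, y_pred):
        sq = np.square(np.asarray(y_true) - np.asarray(y_pred))
        self.total += sq.sum()
        self.count += sq.size
        return np.sqrt(self.total / self.count)

np.random.seed(0)
x1, y1 = np.random.randn(32, 10), np.random.randn(32, 10)
x2, y2 = np.random.randn(32, 10), np.random.randn(32, 10)

m = RunningRMSE()
print(m(x1, y1))   # RMSE over batch 1 only
print(m(x2, y2))   # RMSE over batches 1 and 2 combined (state carried over)
m.reset_states()
print(m(x1, y1))   # back to the batch-1 value after reset
```

This reproduces the behavior seen above: the second call reflects all data seen so far, and only reset_states() starts the accumulation over.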
answered Oct 23 '22 by OverLordGoldDragon