I have a net in TensorFlow and I am trying to reimplement it in Keras. Compared to the TensorFlow model, the Keras model completely underperforms: the loss is much higher and decreases more slowly than in the original model. My best guess is that I am using the wrong optimizer. In the TensorFlow code the optimizer looks like this:
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(0.0001,
                                           global_step,
                                           decay_steps=10000,
                                           decay_rate=0.33,
                                           staircase=True)
optimizer = tf.train.AdamOptimizer(learning_rate, epsilon=1e-8)
train_op = optimizer.minimize(total_loss, global_step)
In Keras it looks like this:
adam = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
model.compile(loss=get_loss_funcs(), optimizer=adam)
Is there a way to implement the Tensorflow optimizer in Keras?
Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments.
An optimizer in Keras is a class that holds the state needed to train a model, such as the learning rate and moment estimates. It is initialized from its hyperparameters alone; no tensors are required at construction time. Choosing a suitable optimizer improves both training speed and final performance.
The Adam optimizer is an extension of stochastic gradient descent that is widely used in deep learning applications such as computer vision and natural language processing. Adam was first introduced in 2014.
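For reference, here is a minimal sketch of a single Adam update step (illustrative only, not the exact implementation inside TF or Keras; variable names follow the original paper):

import numpy as np

# One Adam step: m and v are the running first- and second-moment
# estimates, t is the 1-based step count.
def adam_step(param, grad, m, v, t, lr=0.0001, beta_1=0.9, beta_2=0.999, eps=1e-8):
    m = beta_1 * m + (1 - beta_1) * grad        # update biased first moment
    v = beta_2 * v + (1 - beta_2) * grad ** 2   # update biased second moment
    m_hat = m / (1 - beta_1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - beta_2 ** t)               # bias-corrected second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v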
Yes, there is! - TFOptimizer
class TFOptimizer(Optimizer):
"""Wrapper class for native TensorFlow optimizers.
"""
It is called like this:
keras.optimizers.TFOptimizer(optimizer)
The wrapper will help you see whether the issue is really due to the optimizer.
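As a rough sketch, assuming you keep the rest of the setup from your question (including model and get_loss_funcs()), wrapping the original TF optimizer could look like this:

import tensorflow as tf
import keras

# Rebuild the original TF optimizer, including the exponential decay
# schedule, and hand it to Keras via the TFOptimizer wrapper.
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(0.0001,
                                           global_step,
                                           decay_steps=10000,
                                           decay_rate=0.33,
                                           staircase=True)
tf_adam = tf.train.AdamOptimizer(learning_rate, epsilon=1e-8)

model.compile(loss=get_loss_funcs(),
              optimizer=keras.optimizers.TFOptimizer(tf_adam))

One caveat: whether Keras advances the global_step variable you created yourself depends on the Keras version, so verify that the decay actually kicks in (for example by logging the learning rate). If it does not, an alternative is to keep the stock Keras Adam and reproduce the staircase decay with a keras.callbacks.LearningRateScheduler callback.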