I want to reduce the learning rate of the SGD optimizer in TensorFlow 2.0. I used this code:
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=self.parameter['learning_rate'],
    decay_steps=1000,
    decay_rate=self.parameter['lr_decay']
)
opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)
But I don't know whether my learning rate has actually dropped. How can I get the current learning rate?
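One quick way to sanity-check the schedule yourself (a sketch based on the snippet above, not part of the original question; opt.iterations is the optimizer's built-in step counter) is to call the schedule object directly, since a LearningRateSchedule is callable with a step value:

# Evaluate the schedule at the optimizer's current step; before any
# training step has run this simply returns the initial learning rate.
print(lr_schedule(opt.iterations).numpy())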
From the SGD documentation: learning_rate defaults to 0.01, and momentum is a float hyperparameter >= 0 that accelerates gradient descent in the relevant direction and dampens oscillations.
A constant learning rate is the default schedule in all Keras optimizers. For example, in the SGD optimizer the learning rate defaults to 0.01. To use a custom learning rate, simply instantiate an SGD optimizer and pass the desired value via the learning_rate argument.
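A minimal sketch of that (the value 0.1 here is an arbitrary example, not anything from the question):

import tensorflow as tf

# Default is SGD(learning_rate=0.01); pass learning_rate to override it.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)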
Although the details of this optimizer are beyond the scope of this article, it is worth mentioning that Adam maintains a separate adaptive learning rate for each model parameter/weight. This means that with Adam the effective learning rate may initially increase for some parameters, such as those in early layers, which can help improve the training efficiency of deep neural networks.
Step decay: a typical approach is to drop the learning rate by half every 10 epochs. To implement this in Keras, we can define a step-decay function and pass it to the LearningRateScheduler callback, which returns the updated learning rate for use in the SGD optimizer.
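A minimal sketch of that step-decay setup (the initial rate of 0.1 and the commented-out model.fit call are assumptions for illustration):

import math
import tensorflow as tf

def step_decay(epoch):
    # Halve the learning rate every 10 epochs, starting from 0.1.
    initial_lr = 0.1
    drop = 0.5
    epochs_drop = 10
    return initial_lr * math.pow(drop, math.floor(epoch / epochs_drop))

lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
# model.fit(x_train, y_train, epochs=50, callbacks=[lr_callback])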
The _decayed_lr method applies the decay schedule to the learning_rate based on the optimizer's current iteration count and returns the actual learning rate at that specific iteration. It also casts the returned value to the dtype you specify. So, the following code can do the job for you:
opt._decayed_lr(tf.float32)
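For context, here is a minimal sketch of how the decayed rate changes as the optimizer's iteration counter advances (the toy variable, loss, and hyperparameter values are assumptions used only to drive apply_gradients):

import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.96)
opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)

var = tf.Variable(1.0)
for step in range(3):
    with tf.GradientTape() as tape:
        loss = var ** 2
    grads = tape.gradient(loss, [var])
    opt.apply_gradients(zip(grads, [var]))  # advances opt.iterations
    print(step, opt._decayed_lr(tf.float32).numpy())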
@Lisanu's answer worked for me as well.
Here's why and how that answer works:
TensorFlow's GitHub page shows the code for tf.keras.optimizers. If you scroll down, there is a function named _decayed_lr, which lets users get the decayed learning rate as a Tensor with dtype=var_dtype. Therefore, by calling optimizer._decayed_lr(tf.float32), we can get the current decayed learning rate.
If you'd like to print the current decayed learning rate during training in TensorFlow, you can define a custom callback class that uses optimizer._decayed_lr(tf.float32). For example:
import tensorflow as tf

class CustomCallback(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # _decayed_lr evaluates the schedule at the optimizer's current iteration count
        current_decayed_lr = self.model.optimizer._decayed_lr(tf.float32).numpy()
        print("current decayed lr: {:0.7f}".format(current_decayed_lr))