How do I get the current learning rate of the SGD optimizer in TensorFlow 2.0 when using tf.keras.optimizers.schedules.ExponentialDecay?

I want to reduce the learning rate of the SGD optimizer in TensorFlow 2.0, so I used these lines of code:

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=self.parameter['learning_rate'],
    decay_steps=1000,
    decay_rate=self.parameter['lr_decay']
)
opt = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)

But I can't tell whether my learning rate has actually dropped. How can I get the current learning rate?

asked Nov 11 '19 by saleizl


People also ask

What is the default learning rate in Tensorflow?

For the SGD optimizer, the learning rate defaults to 0.01. (The separate momentum argument is a float hyperparameter >= 0 that accelerates gradient descent in the relevant direction and dampens oscillations.)

How do I schedule a learning rate in keras?

A constant learning rate is the default schedule in all Keras optimizers. For example, in the SGD optimizer the learning rate defaults to 0.01. To use a custom learning rate, simply instantiate the optimizer and pass the desired value via the learning_rate argument.
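For instance, a minimal sketch of both cases (the 0.001 value below is purely illustrative):

import tensorflow as tf

# Default schedule: a constant learning rate of 0.01
opt_default = tf.keras.optimizers.SGD()

# Custom constant learning rate passed explicitly
opt_custom = tf.keras.optimizers.SGD(learning_rate=0.001)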

Does Adam Optimizer change learning rate?

Although details about this optimizer are beyond the scope of this article, it's worth mentioning that Adam maintains a separate learning rate for each model parameter/weight. This means that with Adam, the effective learning rate may initially increase for early layers, which can help improve the efficiency of deep neural networks.

How do you use learning rate decay in keras?

Step decay: a typical approach is to drop the learning rate by half every 10 epochs. To implement this in Keras, we can define a step-decay function and pass it to the LearningRateScheduler callback, which applies the updated learning rate to the SGD optimizer at each epoch, as in the sketch below.
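A minimal sketch of that approach (the halving factor and the 10-epoch interval are just the example values from above; step_decay is a name chosen here for illustration):

import tensorflow as tf

def step_decay(epoch, lr):
    # Halve the current learning rate every 10 epochs
    if epoch > 0 and epoch % 10 == 0:
        return lr * 0.5
    return lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay, verbose=1)
# model.fit(x_train, y_train, epochs=50, callbacks=[lr_callback])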


2 Answers

The optimizer's _decayed_lr method applies the decay schedule to learning_rate based on the optimizer's current iteration count and returns the actual learning rate at that step. It also casts the returned value to the dtype you specify. So, the following code can do the job for you:

opt._decayed_lr(tf.float32)
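For example, you could check the value right after a training step (a minimal sketch, assuming opt is the SGD optimizer built above and eager execution is enabled):

# Query the schedule at the optimizer's current iteration count
current_lr = opt._decayed_lr(tf.float32).numpy()
print("lr at iteration {}: {:.7f}".format(opt.iterations.numpy(), current_lr))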
answered Oct 09 '22 by Lisanu


@Lisanu's answer worked for me as well.
Here's why and how that answer works:

TensorFlow's GitHub repository contains the source code for tf.keras.optimizers.
If you scroll through it, there is a method named _decayed_lr which lets users get the decayed learning rate as a Tensor with dtype=var_dtype.

Therefore, by using optimizer._decayed_lr(tf.float32), we can get the current decayed learning rate.

If you'd like to print the current decayed learning rate during training in TensorFlow, you can define a custom callback class and use optimizer._decayed_lr(tf.float32). For example:

class CustomCallback(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        current_decayed_lr = self.model.optimizer._decayed_lr(tf.float32).numpy()
        print("current decayed lr: {:0.7f}".format(current_decayed_lr))
answered Oct 09 '22 by Dane Lee