 

Keras: how to output the learning rate to TensorBoard

I added a callback to decay the learning rate:

keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=100,
                                  verbose=0, mode='auto', epsilon=0.00002,
                                  cooldown=20, min_lr=0)

Here is my tensorboard callback:

keras.callbacks.TensorBoard(log_dir='./graph/rank{}'.format(hvd.rank()), histogram_freq=10, batch_size=FLAGS.batch_size,
                            write_graph=True, write_grads=True, write_images=False)

I want to make sure the learning rate scheduler has kicked in during training, so I want to output the learning rate to TensorBoard, but I cannot find where to set this up.

I also checked the optimizer API, but no luck.

keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

How can I output the learning rate to TensorBoard?

asked Mar 06 '18 by scott huang

People also ask

How do I add a learning rate in Keras?

The constant learning rate is the default schedule in all Keras optimizers. For example, in the SGD optimizer, the learning rate defaults to 0.01. To use a custom learning rate, simply instantiate an SGD optimizer and pass the argument learning_rate=0.01.
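
For illustration, a minimal sketch using tf.keras (the one-layer model is a placeholder):

from tensorflow import keras

# Minimal sketch: a constant learning rate is just the optimizer's
# learning_rate argument (spelled `lr` in older Keras versions).
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01), loss='mse')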

How do I use the learning rate scheduler in Keras?

In the new Keras API you can use a more general version of the schedule function, which takes two arguments, epoch and lr. From the docs: schedule: a function that takes an epoch index as input (integer, indexed from 0) and the current learning rate, and returns a new learning rate as output (float).
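
A hedged sketch of such a two-argument schedule (the hold-then-decay numbers are arbitrary illustrative values):

from tensorflow import keras

# Two-argument schedule: receives the epoch index and the current
# learning rate, returns the rate to use for that epoch.
def schedule(epoch, lr):
    return lr if epoch < 10 else lr * 0.9  # hold for 10 epochs, then decay

lr_callback = keras.callbacks.LearningRateScheduler(schedule, verbose=1)
# model.fit(..., callbacks=[lr_callback])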

How do you decay learning rate in Keras?

Step decay: a typical approach is to drop the learning rate by half every 10 epochs. To implement this in Keras, define a step decay function and pass it to the LearningRateScheduler callback, which returns the updated learning rate for use in the SGD optimizer, as sketched below.
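
A sketch of such a step decay function, assuming a hypothetical initial rate of 0.01:

import math
from tensorflow import keras

# Step decay: halve the learning rate every 10 epochs.
def step_decay(epoch):
    initial_lr = 0.01      # assumed starting rate
    drop = 0.5             # halve the rate...
    epochs_per_drop = 10   # ...every 10 epochs
    return initial_lr * math.pow(drop, math.floor(epoch / epochs_per_drop))

lr_callback = keras.callbacks.LearningRateScheduler(step_decay)
# model.fit(..., callbacks=[lr_callback])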


3 Answers

According to the author of Keras, the proper way is to subclass the TensorBoard callback:

from keras import backend as K
from keras.callbacks import TensorBoard

class LRTensorBoard(TensorBoard):
    # add other arguments to __init__ if you need them
    def __init__(self, log_dir, **kwargs):
        super().__init__(log_dir=log_dir, **kwargs)

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        logs.update({'lr': K.eval(self.model.optimizer.lr)})
        super().on_epoch_end(epoch, logs)

Then pass it as part of the callbacks argument to model.fit (credit Finncent Price):

model.fit(x=..., y=..., callbacks=[LRTensorBoard(log_dir="/tmp/tb_log")])
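
If you are on tf.keras (TensorFlow 2.x) rather than standalone Keras, the optimizer exposes the rate as learning_rate (lr survived as an alias for a while), and it may be a schedule object rather than a plain variable. A hedged adaptation of the same idea, with LRTensorBoardTF as a made-up name:

import tensorflow as tf

class LRTensorBoardTF(tf.keras.callbacks.TensorBoard):
    # Hypothetical tf.keras (TF 2.x) variant of the class above.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        lr = self.model.optimizer.learning_rate
        if isinstance(lr, tf.keras.optimizers.schedules.LearningRateSchedule):
            # schedules are functions of the training step, not plain variables
            lr = lr(self.model.optimizer.iterations)
        logs.update({'lr': float(tf.keras.backend.get_value(lr))})
        super().on_epoch_end(epoch, logs)
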
answered Oct 22 '22 by alkamid


Note that with the current nightly version of TensorFlow (2.5, probably earlier as well), learning rates set via a LearningRateSchedule are automatically added to TensorBoard's logs. The following solution is only necessary if you adapt the learning rate some other way, e.g. via the ReduceLROnPlateau or LearningRateScheduler (not to be confused with LearningRateSchedule) callbacks.
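
For reference, a minimal sketch of such a schedule (the ExponentialDecay parameters are arbitrary illustrative values):

import tensorflow as tf

# Learning rate supplied as a schedule object; per the note above,
# recent TF versions log its value to TensorBoard automatically.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=1000, decay_rate=0.96)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)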

While extending tf.keras.callbacks.TensorBoard is a viable option, I prefer composition over subclassing.

import tensorflow as tf

class LearningRateLogger(tf.keras.callbacks.Callback):
    def __init__(self):
        super().__init__()
        self._supports_tf_logs = True

    def on_epoch_end(self, epoch, logs=None):
        if logs is None or "learning_rate" in logs:
            return
        logs["learning_rate"] = self.model.optimizer.lr

This allows us to compose multiple similar callbacks, and use the logged learning rate in multiple other callbacks (e.g. if you add a CSVLogger it should also write the learning rate values to file).

Then in model.fit:

model.fit(
    callbacks=[
        LearningRateLogger(),
        # other callbacks that update `logs`
        tf.keras.callbacks.TensorBoard(path),
        # other callbacks that use updated logs, e.g. CSVLogger
    ],
    **kwargs
)
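
For context, a sketch of how this might be wired together with the asker's ReduceLROnPlateau; model, x_train, y_train, x_val and y_val are placeholders, and the logger must precede TensorBoard so the entry exists when the epoch's logs are written:

import tensorflow as tf

# Hypothetical end-to-end wiring; `model` and the data arrays are placeholders.
model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),  # needed for monitor='val_loss'
    epochs=50,
    callbacks=[
        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5),
        LearningRateLogger(),                               # adds 'learning_rate' to logs
        tf.keras.callbacks.TensorBoard(log_dir='./graph'),  # sees the updated logs
    ],
)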
answered Oct 22 '22 by DomJack


You gave the optimizer's code twice instead of the TensorBoard callback. Anyway, I didn't find a way to display the learning rate on TensorBoard. I plot it after training has finished, taking the data from the History object (note that 'lr' shows up in the history only because a callback such as ReduceLROnPlateau or LearningRateScheduler writes it into the logs):

import matplotlib.pyplot as plt

plt.style.use(['seaborn-ticks'])  # apply the style before plotting

nb_epoch = len(history1.history['loss'])
learning_rate = history1.history['lr']  # written into the logs by ReduceLROnPlateau
xc = range(nb_epoch)

plt.figure(3, figsize=(7, 5))
plt.plot(xc, learning_rate)
plt.xlabel('num of Epochs')
plt.ylabel('learning rate')
plt.title('Learning rate')
plt.grid(True)

The resulting chart shows the learning rate plotted against the epoch number.

Sorry, that is not exactly what you are asking about, but perhaps could help.

answered Oct 22 '22 by Aleksandr Chernov