How to add variables to progress bar in Keras?

I'd like to monitor, e.g., the learning rate during training in Keras, both in the progress bar and in TensorBoard. I figure there must be a way to specify which variables are logged, but there's no immediate clarification on this in the Keras documentation.

I guess it has something to do with creating a custom Callback; however, shouldn't it also be possible to modify the already existing progress bar callback?

Asked by Neergaard on Jan 11 '18

3 Answers

It can be achieved via a custom metric. Take the learning rate as an example:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler, TensorBoard

def get_lr_metric(optimizer):
    # Report the optimizer's learning rate tensor as an extra "metric"
    def lr(y_true, y_pred):
        return optimizer.lr
    return lr

x = Input((50,))
out = Dense(1, activation='sigmoid')(x)
model = Model(x, out)

optimizer = Adam(lr=0.001)
lr_metric = get_lr_metric(optimizer)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['acc', lr_metric])

# reducing the learning rate by half every 2 epochs
cbks = [LearningRateScheduler(lambda epoch: 0.001 * 0.5 ** (epoch // 2)),
        TensorBoard(write_graph=False)]
X = np.random.rand(1000, 50)
Y = np.random.randint(2, size=1000)
model.fit(X, Y, epochs=10, callbacks=cbks)

The LR will be printed in the progress bar:

Epoch 1/10
1000/1000 [==============================] - 0s 103us/step - loss: 0.8228 - acc: 0.4960 - lr: 0.0010
Epoch 2/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7305 - acc: 0.4970 - lr: 0.0010
Epoch 3/10
1000/1000 [==============================] - 0s 62us/step - loss: 0.7145 - acc: 0.4730 - lr: 5.0000e-04
Epoch 4/10
1000/1000 [==============================] - 0s 58us/step - loss: 0.7129 - acc: 0.4800 - lr: 5.0000e-04
Epoch 5/10
1000/1000 [==============================] - 0s 58us/step - loss: 0.7124 - acc: 0.4810 - lr: 2.5000e-04
Epoch 6/10
1000/1000 [==============================] - 0s 63us/step - loss: 0.7123 - acc: 0.4790 - lr: 2.5000e-04
Epoch 7/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7119 - acc: 0.4840 - lr: 1.2500e-04
Epoch 8/10
1000/1000 [==============================] - 0s 61us/step - loss: 0.7117 - acc: 0.4880 - lr: 1.2500e-04
Epoch 9/10
1000/1000 [==============================] - 0s 59us/step - loss: 0.7116 - acc: 0.4880 - lr: 6.2500e-05
Epoch 10/10
1000/1000 [==============================] - 0s 63us/step - loss: 0.7115 - acc: 0.4880 - lr: 6.2500e-05

Then, you can visualize the LR curve in TensorBoard.

[Image: TensorBoard screenshot of the learning-rate curve]
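
To view the curve, point TensorBoard at the callback's log directory (assuming the default was kept; ./logs is the default log_dir in this version of Keras):

tensorboard --logdir ./logs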

Answered by Yu-Yang on Nov 16 '22


Another (in fact, the encouraged) way to pass custom values to TensorBoard is by subclassing the keras.callbacks.TensorBoard class. This lets you compute the desired metrics with custom functions and pass them directly to TensorBoard.

Here is an example that reconstructs the current learning rate of the Adam optimizer:

import numpy as np
from keras import backend as K
from keras.callbacks import TensorBoard

class SubTensorBoard(TensorBoard):
    def __init__(self, *args, **kwargs):
        super(SubTensorBoard, self).__init__(*args, **kwargs)

    def lr_getter(self):
        # Fetch the optimizer's hyperparameter tensors
        decay = self.model.optimizer.decay
        lr = self.model.optimizer.lr
        iters = self.model.optimizer.iterations  # the only value that changes during training
        beta_1 = self.model.optimizer.beta_1
        beta_2 = self.model.optimizer.beta_2
        # Reproduce Adam's effective learning rate at the current iteration
        lr = lr * (1. / (1. + decay * K.cast(iters, K.dtype(decay))))
        t = K.cast(iters, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(beta_2, t)) / (1. - K.pow(beta_1, t)))
        return np.float32(K.eval(lr_t))

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        logs.update({"lr": self.lr_getter()})
        super(SubTensorBoard, self).on_epoch_end(epoch, logs)
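
A minimal usage sketch (not part of the original answer; the log_dir value and the model, X and Y names from the first answer are assumptions):

# Drop-in replacement for the stock TensorBoard callback
sub_tb = SubTensorBoard(log_dir='./logs', write_graph=False)
model.fit(X, Y, epochs=10, callbacks=[sub_tb])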

Answered by Martin on Nov 16 '22


I came to this question because I wanted to log more variables in the Keras progress bar. This is how I did it after reading the answers here:

import tensorflow as tf

class UpdateMetricsCallback(tf.keras.callbacks.Callback):
  def on_batch_end(self, batch, logs=None):
    logs.update({'my_batch_metric': 0.1, 'my_other_batch_metric': 0.2})
  def on_epoch_end(self, epoch, logs=None):
    logs.update({'my_epoch_metric': 0.1, 'my_other_epoch_metric': 0.2})

model.fit(...,
  callbacks=[UpdateMetricsCallback()]
)
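
A side note (a sketch, not from the original answer): Keras invokes callbacks in list order, so placing a TensorBoard callback after this one should let it pick up the injected epoch-level keys as well; the model, X and Y names are assumed from the first answer:

# TensorBoard runs after UpdateMetricsCallback, so it sees the extra keys in `logs`
model.fit(X, Y, epochs=10,
          callbacks=[UpdateMetricsCallback(),
                     tf.keras.callbacks.TensorBoard(log_dir='./logs')])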

I hope it helps others.

Answered by barbolo on Nov 16 '22