 

Get learning rate of Keras model

I cannot seem to get the value of the learning rate; all I get is the output below.

I've trained the model for 200 epochs and want to see/change the learning rate. Is this not the correct way?

>>> print(ig_cnn_model.optimizer.lr)
<tf.Variable 'lr_6:0' shape=() dtype=float32_ref>
asked Apr 11 '18 by user14492

People also ask

What is the learning rate in Keras?

The constant learning rate is the default schedule in all Keras Optimizers. For example, in the SGD optimizer, the learning rate defaults to 0.01 . To use a custom learning rate, simply instantiate an SGD optimizer and pass the argument learning_rate=0.01 .
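
For instance, a minimal sketch of setting a custom constant rate (the model itself is assumed to exist; only the optimizer setup matters here):

from tensorflow.keras.optimizers import SGD

# Hypothetical model assumed to be defined already
opt = SGD(learning_rate=0.01)
model.compile(optimizer=opt, loss='mse', metrics=['accuracy'])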

How do I use the learning rate scheduler in Keras?

In the new Keras API you can use a more general version of the schedule function, which takes two arguments, epoch and lr. From the docs: schedule: a function that takes an epoch index as input (integer, indexed from 0) and current learning rate and returns a new learning rate as output (float).
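
A minimal sketch of such a two-argument schedule (the 10-epoch hold and the 0.9 factor are arbitrary choices for illustration):

def schedule(epoch, lr):
    # keep the initial rate for the first 10 epochs, then decay it
    if epoch < 10:
        return lr
    return lr * 0.9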

How do I find the best learning rate in TensorFlow?

You can identify a good learning rate by looking at the TensorBoard graph of loss against training step: find the section where the loss is decreasing fastest, and use the learning rate that was being used at that step.
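
One way to generate that graph is a learning rate range test: grow the rate slightly after every batch while recording the loss. A rough sketch, assuming a TF2-style optimizer whose lr attribute is a tf.Variable:

import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    """Multiply the learning rate by a constant factor after each batch,
    recording (lr, loss) pairs so loss can be plotted against lr."""
    def __init__(self, factor=1.05):
        super().__init__()
        self.factor = factor
        self.history = []

    def on_train_batch_end(self, batch, logs=None):
        lr = float(self.model.optimizer.lr.numpy())
        self.history.append((lr, logs['loss']))
        self.model.optimizer.lr.assign(lr * self.factor)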

What is LearningRateScheduler in Keras?

LearningRateScheduler is a learning rate scheduler callback. At the beginning of every epoch, it gets an updated learning rate value from the schedule function provided at __init__, called with the current epoch and current learning rate, and applies the updated learning rate to the optimizer.
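
Wiring the schedule function from the previous sketch into training might look like this (x_train and y_train are placeholders):

from tensorflow.keras.callbacks import LearningRateScheduler

lr_callback = LearningRateScheduler(schedule, verbose=1)
model.fit(x_train, y_train, epochs=20, callbacks=[lr_callback])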

How do I modify the learning rate of a Keras optimizer?

You can use a learning rate schedule to modulate how the learning rate of your optimizer changes over time; check out the learning rate schedule API documentation for a list of available schedules. These methods and attributes are common to all Keras optimizers, including apply_gradients(), which applies gradients to variables and forms the second part of minimize().
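
For example, one of the built-in schedules, ExponentialDecay, can be passed directly as the learning_rate (the step counts here are arbitrary):

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

# decay the rate by a factor of 0.9 every 10,000 steps
lr_schedule = ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=10000,
    decay_rate=0.9)
model.compile(optimizer=Adam(learning_rate=lr_schedule), loss='mse')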

How does the time-based learning rate decay schedule work in Keras?

Keras has a time-based learning rate schedule built in. The stochastic gradient descent implementation in the SGD class has an argument called decay, which is used in the time-based decay equation: lr = initial_lr / (1 + decay * iterations). When the decay argument is zero (the default), this has no effect on the learning rate.
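
A sketch using the older keras.optimizers API that this page's answers also use (the values are illustrative):

from keras.optimizers import SGD

# decay shrinks the rate each batch: lr = initial_lr / (1 + decay * iterations)
opt = SGD(lr=0.1, decay=1e-4, momentum=0.9)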

What are differential learning rates in Keras?

The phrase ‘Differential Learning Rates’ implies the use of different learning rates on different parts of the network, with a lower learning rate in the initial layers and gradually higher rates in the later layers. To implement differential learning rates in Keras itself, we need to modify the optimizer source code; a workaround is sketched below.
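
Short of editing the source, one common workaround is to drive different variable groups with separate optimizers in a custom training step. A rough sketch (the model and the layer split are assumptions for illustration):

import tensorflow as tf

# Hypothetical split: first layer vs. the rest of the model
early_vars = model.layers[0].trainable_variables
late_vars = [v for layer in model.layers[1:]
             for v in layer.trainable_variables]

opt_early = tf.keras.optimizers.Adam(learning_rate=1e-4)  # lower rate
opt_late = tf.keras.optimizers.Adam(learning_rate=1e-3)   # higher rate

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(
            tf.keras.losses.mse(y, model(x, training=True)))
    # one gradient pass, then each optimizer updates its own group
    grads = tape.gradient(loss, early_vars + late_vars)
    opt_early.apply_gradients(zip(grads[:len(early_vars)], early_vars))
    opt_late.apply_gradients(zip(grads[len(early_vars):], late_vars))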


4 Answers

Use eval() from keras.backend:

import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

# build and compile a minimal model so the optimizer exists
model = Sequential()
model.add(Dense(1, input_shape=(1,)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

# lr is a backend variable; eval() returns its float value
print(K.eval(model.optimizer.lr))

Output:

0.001
answered Oct 21 '22 by Primusa


The best way to get all information related to the optimizer is with .get_config().

Example:

model.compile(optimizer=optimizerF,
                  loss=lossF,
                  metrics=['accuracy'])

model.optimizer.get_config()

>>> {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}

It returns a dict with all of the optimizer's settings.
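
To pull out just the learning rate from that dict:

lr = model.optimizer.get_config()['learning_rate']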

answered Oct 21 '22 by eXistanCe


You can change the learning rate by passing a configured optimizer when compiling:

from keras.optimizers import Adam

model.compile(optimizer=Adam(lr=0.001), 
              loss='categorical_crossentropy', 
              metrics=['accuracy'])
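
If you'd rather not recompile, the backend setter can also overwrite the existing optimizer's variable in place:

import keras.backend as K

# assign a new value to the optimizer's lr variable
K.set_value(model.optimizer.lr, 0.0005)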
answered Oct 21 '22 by Mimi Cheng


With TensorFlow >= 2.0:

In [1]: import tensorflow as tf

In [2]: opt = tf.keras.optimizers.Adam()

In [3]: opt.lr.numpy()
Out[3]: 0.001

lr is just a tf.Variable, so its value can be changed via the assign() method:

In [4]: opt.lr.assign(0.1)
Out[4]: <tf.Variable 'UnreadVariable' shape=() dtype=float32, numpy=0.1>

In [5]: opt.lr.numpy()
Out[5]: 0.1

The same goes for the rest of the hyperparameters:

In [6]: opt.decay.numpy()
Out[6]: 0.0

In [7]: opt.beta_1.numpy()
Out[7]: 0.9

In [8]: opt.beta_2.numpy()
Out[8]: 0.999
answered Oct 21 '22 by hoefling