While training, I'd like to know the value of learning_rate. What should I do?
Here is my code:
my_optimizer = torch.optim.SGD(my_model.parameters(),
                               lr=0.001,
                               momentum=0.99,
                               weight_decay=2e-3)
Thank you.
There are multiple ways to select a good starting point for the learning rate. A naive approach is to try a few different values and see which one gives you the best loss without sacrificing training speed. You might start with a large value like 0.1, then try exponentially smaller values: 0.01, 0.001, and so on.
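For example, a minimal sketch of such a sweep might look like this (train_briefly is a hypothetical helper standing in for a short training run that returns the loss it reached; torch and my_model are assumed to be imported/defined as in the question, and in practice you would also re-initialize the model for each candidate):

candidate_lrs = [0.1, 0.01, 0.001, 0.0001]
losses = {}
for lr in candidate_lrs:
    optimizer = torch.optim.SGD(my_model.parameters(), lr=lr,
                                momentum=0.99, weight_decay=2e-3)
    # train_briefly: hypothetical helper that runs a few epochs and returns the loss
    losses[lr] = train_briefly(my_model, optimizer)
best_lr = min(losses, key=losses.get)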
LambdaLR, for example, sets the learning rate of each parameter group to the initial lr times a given function.
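As a quick sketch, a LambdaLR that multiplies the initial lr by 0.95 ** epoch (the decay factor here is only illustrative) would look like:

# lr after t scheduler steps = initial_lr * 0.95 ** t
my_lr_scheduler = torch.optim.lr_scheduler.LambdaLR(my_optimizer,
                                                    lr_lambda=lambda epoch: 0.95 ** epoch)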
The Adam optimizer is an adaptive learning rate optimizer that is very popular for deep learning, especially in computer vision. In some papers, the learning rate is decreased after a fixed number of epochs, for example by dividing it by 10 every 50 epochs.
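If you want that behaviour without a scheduler, you can write the new value into the optimizer's parameter groups yourself; a rough sketch (num_epochs and the inner training code are placeholders):

for epoch in range(num_epochs):
    if epoch > 0 and epoch % 50 == 0:
        # divide the learning rate by 10 every 50 epochs
        for param_group in my_optimizer.param_groups:
            param_group['lr'] /= 10.0
    # ... one epoch of training with my_optimizer ...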
For only one parameter group, as in the example you've given, you can use this function and call it during training to get the current learning rate:
def get_lr(optimizer):
    # returns the lr of the first (here, the only) parameter group
    for param_group in optimizer.param_groups:
        return param_group['lr']
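For example, you could log it once per epoch during training (the loop below is only a skeleton):

for epoch in range(num_epochs):
    # ... training steps with my_optimizer ...
    print(f"epoch {epoch}: lr = {get_lr(my_optimizer)}")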
Alternatively, you may use an lr_scheduler along with your optimizer and simply call the built-in lr_scheduler.get_lr() method (in recent PyTorch versions, get_last_lr() is the recommended equivalent).
Here is an example:
my_optimizer = torch.optim.Adam(my_model.parameters(),
                                lr=0.001,
                                weight_decay=0.002)

# divide the learning rate by 10 every 50 scheduler steps
my_lr_scheduler = torch.optim.lr_scheduler.StepLR(my_optimizer,
                                                  step_size=50,
                                                  gamma=0.1)

# train
...
my_optimizer.step()
my_lr_scheduler.step()

# get learning rate (get_lr() returns a list, one value per parameter group)
my_lr = my_lr_scheduler.get_lr()
# or
my_lr = my_lr_scheduler.optimizer.param_groups[0]['lr']
The added benefit of using an lr_scheduler is finer control over how the learning rate changes over time (step decay, exponential decay, etc.). For the available schedulers and their arguments, refer to the PyTorch docs.
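For instance, swapping StepLR for ExponentialLR decays the learning rate by a constant factor at every scheduler step (gamma=0.95 is just an example value):

# lr is multiplied by 0.95 each time my_lr_scheduler.step() is called
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(my_optimizer, gamma=0.95)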