
PyTorch: How to change the learning rate of an optimizer at any given moment (no LR schedule)

Is it possible in PyTorch to change the learning rate of an optimizer dynamically in the middle of training (I don't want to define a learning rate schedule beforehand)?

So let's say I have an optimizer:

optim = torch.optim.SGD(model.parameters(), lr=0.01) 

Now, due to some tests I perform during training, I realize my learning rate is too high, so I want to change it to, say, 0.001. There doesn't seem to be a method like optim.set_lr(0.001), but is there some other way to do this?

asked Jan 18 '18 by patapouf_ai


2 Answers

The learning rate is stored in optim.param_groups[i]['lr']. optim.param_groups is a list of the parameter groups, each of which can have its own learning rate. Thus, simply doing:

for g in optim.param_groups:
    g['lr'] = 0.001

will do the trick.
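
If you end up changing the rate more than once during training, you can wrap that loop in a small helper. A minimal sketch (the name set_lr is just for illustration, it is not a PyTorch API):

def set_lr(optimizer, lr):
    # Apply the same learning rate to every parameter group.
    for g in optimizer.param_groups:
        g['lr'] = lr

set_lr(optim, 0.001)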


Alternatively, as mentioned in the comments, if your learning rate only depends on the epoch number, you can use a learning rate scheduler.

For example (adapted from the docs):

from torch.optim.lr_scheduler import LambdaLR

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Assuming the optimizer has two parameter groups.
lambda_group1 = lambda epoch: epoch // 30
lambda_group2 = lambda epoch: 0.95 ** epoch
scheduler = LambdaLR(optimizer, lr_lambda=[lambda_group1, lambda_group2])

for epoch in range(100):
    train(...)
    validate(...)
    scheduler.step()
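
LambdaLR multiplies each group's initial learning rate by the factor returned by the corresponding lambda at each step, so here group 2's rate decays by about 5% per epoch, while group 1's is scaled by epoch // 30.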

There is also a prebuilt scheduler, ReduceLROnPlateau, which reduces the learning rate when a monitored metric stops improving.
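
A minimal sketch of that, assuming you track a validation loss (train and validate are placeholders, as in the example above):

from torch.optim.lr_scheduler import ReduceLROnPlateau

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Halve the learning rate once the validation loss has stopped improving for 5 epochs.
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=5)

for epoch in range(100):
    train(...)
    val_loss = validate(...)
    scheduler.step(val_loss)  # pass the metric being monitored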

answered Sep 22 '22 by patapouf_ai

Instead of the loop in patapouf_ai's answer, you can set it directly via:

optim.param_groups[0]['lr'] = 0.001 
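
Note that this only updates the first parameter group; if the optimizer was created with several parameter groups, each group's 'lr' needs to be set individually (as in the loop above).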
answered Sep 23 '22 by Michael