Below is my code:
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

model = Sequential([
    Dense(32, input_shape=(32,), activation='relu'),
    Dense(100, activation='relu'),
    Dense(65, activation='softmax')  # input_shape is only needed on the first layer
])
model.summary()
model.compile(SGD(lr=.1), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_samples, train_labels, batch_size=1000, epochs=1000, shuffle=True, verbose=2)
How can I set an adaptive learning rate for my model?
Constant learning rate. The constant learning rate is the default schedule in all Keras optimizers. For example, in the SGD optimizer the learning rate defaults to 0.01. To use a different constant learning rate, instantiate the SGD optimizer yourself and pass the desired value via the lr argument (learning_rate in newer versions).
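For example, a minimal sketch (assuming the model, loss, and metrics from the question; the value 0.05 is just an illustration) of compiling with a custom constant learning rate:

from keras.optimizers import SGD

# constant learning rate of 0.05 instead of the SGD default of 0.01
# (newer Keras/TensorFlow versions spell the argument learning_rate)
model.compile(optimizer=SGD(lr=0.05),
              loss='binary_crossentropy',
              metrics=['accuracy'])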
In Keras, we can use these adaptive learning rate algorithms simply by choosing the corresponding optimizer. It is usually recommended to leave the hyperparameters of these optimizers at their default values (except lr, sometimes). Let us now look at the model's performance using the different adaptive learning rate methods.
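For instance, a minimal sketch (again assuming the model from the question) that switches to Adam with its default hyperparameters:

from keras.optimizers import Adam

# Adam adapts per-parameter step sizes automatically; its defaults
# (lr=0.001, beta_1=0.9, beta_2=0.999) are usually left unchanged
model.compile(optimizer=Adam(),
              loss='binary_crossentropy',
              metrics=['accuracy'])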
Looking into the source code of Keras, the SGD optimizer takes decay and lr arguments and decreases the learning rate by a factor at every update (iteration, i.e. every batch): lr *= (1. / (1. + self.decay * self.iterations)). momentum is another argument of the SGD optimizer which we could tweak to obtain faster convergence.
Keras has a time-based learning rate schedule built in. The stochastic gradient descent optimization algorithm implementation in the SGD class has an argument called decay. This argument is used in the time-based learning rate decay schedule as follows: lr = initial_lr * 1 / (1 + decay * iterations). When the decay argument is zero (the default), it has no effect on the learning rate.
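As a concrete sketch of time-based decay plus momentum (the decay and momentum values below are illustrative, not taken from the question):

from keras.optimizers import SGD

# starts at lr=0.1; the effective rate shrinks every batch as
# 0.1 * 1 / (1 + 1e-4 * iterations), and momentum smooths the updates
opt = SGD(lr=0.1, decay=1e-4, momentum=0.9)
model.compile(optimizer=opt,
              loss='binary_crossentropy',
              metrics=['accuracy'])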
You don't need to recompile the model as the other answer suggested. Keras comes with callbacks which can be used for this task. More precisely, you can use the LearningRateScheduler callback and pass it a function that will adapt the learning rate based on the current epoch index.
Suppose that you want your learning rate to be some number times the epoch index (probably not the best idea, but easy to comprehend):

def adapt_learning_rate(epoch):
    # note: epoch indices start at 0, so this returns lr=0 for the first epoch
    return 0.001 * epoch
Now that we have our function, we can create a learning rate scheduler that is responsible for calculating the learning rate at the beginning of each epoch.
my_lr_scheduler = keras.callbacks.LearningRateScheduler(adapt_learning_rate)
The last thing to do is to pass this callback to the fit method:

model.fit(X, y, ..., callbacks=[my_lr_scheduler])
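Putting it together, a minimal end-to-end sketch. The data and the small model here are placeholders, not the questioner's, and the schedule is shifted by one so the first epoch does not get lr=0:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import LearningRateScheduler

# placeholder data just to make the example runnable
X = np.random.rand(1000, 32)
y = np.random.randint(0, 2, size=(1000, 1))

model = Sequential([
    Dense(32, input_shape=(32,), activation='relu'),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

def adapt_learning_rate(epoch):
    return 0.001 * (epoch + 1)  # epoch + 1 avoids a zero learning rate in epoch 0

my_lr_scheduler = LearningRateScheduler(adapt_learning_rate)
model.fit(X, y, epochs=10, batch_size=32, callbacks=[my_lr_scheduler], verbose=2)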
You may use a workaround: for each_iteration in range(0, MaxEpoch), do the following.

1. Specify your own learning rate function that outputs a learning rate lr for the current epoch. The lr is then passed to your_optimiser.
2. Run model.compile(...optimizer=your_optimiser...).
3. Run model.fit(...epochs=1...).
4. After the ONE epoch, use model.save_weights(...).
5. Load the weights with model.load_weights(...) for the next iteration. See here for details: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model

In fact, steps #4 and #5 also enable you to do transfer learning. A sketch of this loop is shown below.
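Here is a minimal sketch of that workaround, assuming the model, train_samples, and train_labels from the question; my_learning_rate, MaxEpoch, and the file name weights.h5 are hypothetical names used only for illustration:

from keras.optimizers import SGD

MaxEpoch = 10  # illustrative

def my_learning_rate(epoch):
    # hypothetical schedule: halve the rate every epoch, starting from 0.1
    return 0.1 * (0.5 ** epoch)

for each_iteration in range(0, MaxEpoch):
    your_optimiser = SGD(lr=my_learning_rate(each_iteration))     # step 1
    model.compile(optimizer=your_optimiser,                       # step 2
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    if each_iteration > 0:
        model.load_weights('weights.h5')   # step 5: restore weights from the previous epoch
    model.fit(train_samples, train_labels,                         # step 3
              epochs=1, batch_size=1000, verbose=2)
    model.save_weights('weights.h5')       # step 4: save after the ONE epoch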