I am really new to Keras, so forgive me if my query is a bit silly. I installed Keras on my system using the default method and it works fine. I want to add a new optimizer to Keras so that I can simply write optimizer='mynewone' in the model.compile call. How do I go about changing the optimizers.py code in Keras and making sure the change is reflected in my Keras environment? Here is what I tried:
Suppose I change the optimizer name from rmsprop to rmsprops in the code. I then get the following error:
model.compile(loss='binary_crossentropy', optimizer='rmsprops', metrics=['accuracy'])
Traceback (most recent call last):
File "<ipython-input-33-40773d534448>", line 1, in <module>
model.compile(loss='binary_crossentropy', optimizer='rmsprops', metrics=['accuracy'])
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/models.py", line 589, in compile
**kwargs)
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/engine/training.py", line 469, in compile
self.optimizer = optimizers.get(optimizer)
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/optimizers.py", line 614, in get
# Instantiate a Keras optimizer
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 16, in get_from_module
str(identifier))
ValueError: Invalid optimizer: rmsprops
Then, when I open optimizers.py, I see the Keras source code installed in my environment. I replaced every occurrence of "rmsprop" in that file with "rmsprops" and saved it, so I assumed my system now had the updated optimizers.py. But when I go back to my original script and run model.compile, it throws the same error.
Any help would be really appreciated. Thanks in advance.
A constant learning rate is the default schedule in all Keras optimizers. For example, in the SGD optimizer the learning rate defaults to 0.01. To use a custom learning rate, simply instantiate an SGD optimizer yourself and pass a learning_rate value explicitly instead of relying on the default.
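For example, something like the following (a minimal sketch assuming model is an already defined Keras model; note that older Keras releases spell this argument lr rather than learning_rate):

from keras.optimizers import SGD

# Use an explicit learning rate instead of relying on the 0.01 default
# (in older Keras versions this argument is called lr).
sgd = SGD(learning_rate=0.001)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])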
RMSprop is an optimizer that implements the RMSprop algorithm. The gist of RMSprop is to maintain a moving (discounted) average of the square of the gradients and divide the gradient by the root of this average.
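As a rough standalone illustration of those two steps (a NumPy sketch, not the actual Keras implementation; the names rmsprop_step, avg_sq_grad, rho and eps are just placeholders):

import numpy as np

def rmsprop_step(param, grad, avg_sq_grad, lr=0.001, rho=0.9, eps=1e-7):
    # Maintain a moving (discounted) average of the square of the gradients.
    avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * np.square(grad)
    # Divide the gradient by the root of this average and take a step.
    param = param - lr * grad / (np.sqrt(avg_sq_grad) + eps)
    return param, avg_sq_grad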
I think your approach is more complicated than it needs to be; there is no need to edit the Keras source at all. Let's say you implement your own optimizer by subclassing keras.optimizers.Optimizer:
class MyOptimizer(Optimizer):
    # your optimizer logic goes here (the exact methods to override depend on your Keras version)
    pass
Then to instantiate it in your model you can do this:
myOpt = MyOptimizer()
model.compile(loss='binary_crossentropy', optimizer=myOpt, metrics=['accuracy'])
Just pass an instance of your optimizer as the optimizer argument of model.compile and that's it: Keras will now use your optimizer.
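For reference, here is a minimal sketch of what such a subclass could look like in standalone Keras 2.x, where the main hook is get_updates; newer tf.keras and Keras 3 releases use a different base-class API (for example update_step), so treat this as an outline rather than a drop-in implementation. MyOptimizer and its plain gradient-descent update rule are purely illustrative:

from keras import backend as K
from keras.optimizers import Optimizer

class MyOptimizer(Optimizer):
    # Plain gradient descent, written as a custom optimizer for illustration.

    def __init__(self, lr=0.01, **kwargs):
        super(MyOptimizer, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.lr = K.variable(lr, name='lr')

    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = []
        for p, g in zip(params, grads):
            # Move each parameter a small step against its gradient.
            self.updates.append(K.update(p, p - self.lr * g))
        return self.updates

    def get_config(self):
        config = {'lr': float(K.get_value(self.lr))}
        base_config = super(MyOptimizer, self).get_config()
        return dict(list(config.items()) + list(base_config.items()))

Because the instance is passed straight to model.compile, nothing in the installed Keras source has to be edited at all.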