I want to use the SGD optimizer in Flux as shown in the Julia Academy tutorial for Deep Learning with Flux.jl. This is the notebook they provided, in which they use the SGD optimizer as:
opt = SGD(params(model))
However, when I run SGD I get:
ERROR: UndefVarError: SGD not defined
This is my output when I run ?SGD:
search: SGD AMSGrad Signed signed Unsigned unsigned sigmoid issetgid logsigmoid StringIndexError isassigned significand
Couldn't find SGD
Perhaps you meant SGD, Set, Sys, GC, Some, sec, sin, sum, LSTM, csc, esc, isa, ans, abs, cis, cos, eps, ARGS, Pkg, GRU, RNN, cpu, elu, f32, f64, gpu, σ, !, !=, !== or %
No documentation found.
Binding SGD does not exist.
As you can see, it still shows SGD in the "Perhaps you meant" line.
I do not get an error when I run other optimizers shown in the tutorial, such as ADAM. I am using Flux v0.10.0.
The tutorial uses an outdated version of Flux.
In version v0.10.0, Flux has deprecated SGD in favor of Descent, which is just a more streamlined implementation of the standard gradient descent algorithm. More information on the Descent optimizer can be found in the documentation.
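For example, where the tutorial constructs the optimizer as SGD(params(model)), in v0.10.0 you would instead write something like the following (the 0.1 learning rate is just an illustrative value):

opt = Descent(0.1)  # plain gradient descent with learning rate 0.1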
Also, as a side note, Flux no longer expects params(model) to be passed into the optimizer; instead, it is given as a separate argument to Flux.train! when training.
# New Way
Flux.train!(loss, params(model), data, optimizer)
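For instance, here is a minimal sketch of the v0.10.0 training API; the Dense layer, mse loss, and random data are placeholders chosen for illustration, not part of the tutorial:

using Flux

model = Dense(2, 1)                      # placeholder toy model
loss(x, y) = Flux.mse(model(x), y)       # mean-squared-error loss
data = [(rand(2, 10), rand(1, 10))]      # a single (input, target) batch
opt = Descent(0.1)                       # replaces the old SGD optimizer

# params(model) is passed to train!, not to the optimizer
Flux.train!(loss, params(model), data, opt)

Each call to Flux.train! runs one pass over data, updating the parameters collected by params(model) with the Descent rule.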