Optimisation Rules
Flux builds in many optimisation rules for use with train! and other training functions.
The mechanism by which these work is gradually being replaced as part of the change from "implicit" dictionary-based to "explicit" tree-like structures. At present, the same struct (such as Adam) can be used with either form, and will be automatically translated.
For full details of how the new interface works, see the Optimisers.jl documentation.
For full details on how the old "implicit" interface worked, see the Flux 0.13.6 manual.
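To make the translation concrete, here is a minimal sketch (not taken from the docstrings) showing the same Adam struct driving both styles on a recent Flux version; the tiny Dense model, the random batch, and the mse loss are purely illustrative assumptions:
using Flux

# Illustrative model and a single (input, target) batch.
model = Dense(4 => 2)
data = [(rand(Float32, 4, 16), rand(Float32, 2, 16))]
opt = Adam(0.001)

# "Implicit" style documented on this page: parameters collected into Params.
ps = Flux.params(model)
Flux.train!((x, y) -> Flux.Losses.mse(model(x), y), ps, data, opt)

# "Explicit" style from Optimisers.jl: the optimiser state mirrors the model tree.
opt_state = Flux.setup(opt, model)
Flux.train!((m, x, y) -> Flux.Losses.mse(m(x), y), model, data, opt_state)
Both forms update the same model; the implicit form is the one documented on the rest of this page.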
Optimiser Reference
All optimisers return an object that, when passed to train!, will update the parameters passed to it.
Flux.Optimise.Descent — Type
Descent(η = 0.1)
Classic gradient descent optimiser with learning rate η. For each parameter p and its gradient δp, this runs p -= η*δp.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
Examples
opt = Descent()
opt = Descent(0.3)
ps = Flux.params(model)
gs = gradient(ps) do
    loss(x, y)
end

Flux.Optimise.update!(opt, ps, gs)
Flux.Optimise.Momentum — Type
Momentum(η = 0.01, ρ = 0.9)
Gradient descent optimiser with learning rate η and momentum ρ.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.
Examples
opt = Momentum()
opt = Momentum(0.01, 0.99)
Flux.Optimise.Nesterov — Type
Nesterov(η = 0.001, ρ = 0.9)
Gradient descent optimiser with learning rate η and Nesterov momentum ρ.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Nesterov momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.
Examples
opt = Nesterov()
opt = Nesterov(0.003, 0.95)
Flux.Optimise.RMSProp — Type
RMSProp(η = 0.001, ρ = 0.9, ϵ = 1.0e-8)
Optimiser using the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.
Examples
opt = RMSProp()
opt = RMSProp(0.002, 0.95)
Flux.Optimise.Adam — Type
Adam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)
Adam optimiser.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = Adam()
opt = Adam(0.001, (0.9, 0.8))
Flux.Optimise.RAdam — Type
RAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)
Rectified Adam optimiser.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = RAdam()
opt = RAdam(0.001, (0.9, 0.8))
Flux.Optimise.AdaMax — Type
AdaMax(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)
AdaMax is a variant of Adam based on the ∞-norm.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = AdaMax()
opt = AdaMax(0.001, (0.9, 0.995))
Flux.Optimise.AdaGrad — Type
AdaGrad(η = 0.1, ϵ = 1.0e-8)
AdaGrad optimiser. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
Examples
opt = AdaGrad()
opt = AdaGrad(0.001)
Flux.Optimise.AdaDelta — Type
AdaDelta(ρ = 0.9, ϵ = 1.0e-8)
AdaDelta is a version of AdaGrad adapting its learning rate based on a window of past gradient updates. Parameters don't need tuning.
Parameters
- Rho (ρ): Factor by which the gradient is decayed at each time step.
Examples
opt = AdaDelta()
opt = AdaDelta(0.89)
Flux.Optimise.AMSGrad — Type
AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)
The AMSGrad version of the Adam optimiser. Parameters don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = AMSGrad()
opt = AMSGrad(0.001, (0.89, 0.995))
Flux.Optimise.NAdam — Type
NAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)
NAdam is a Nesterov variant of Adam. Parameters don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = NAdam()
opt = NAdam(0.002, (0.89, 0.995))
Flux.Optimise.AdamW — Function
AdamW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)
AdamW is a variant of Adam fixing (as in repairing) its weight decay regularization.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- decay: Decay applied to weights during optimisation.
Examples
opt = AdamW()
opt = AdamW(0.001, (0.89, 0.995), 0.1)
Flux.Optimise.OAdam — Type
OAdam(η = 0.0001, β::Tuple = (0.5, 0.9), ϵ = 1.0e-8)
OAdam (Optimistic Adam) is a variant of Adam adding an "optimistic" term suitable for adversarial training.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = OAdam()
opt = OAdam(0.001, (0.9, 0.995))
Flux.Optimise.AdaBelief — Type
AdaBelief(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)
The AdaBelief optimiser is a variant of the well-known Adam optimiser.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Examples
opt = AdaBelief()
opt = AdaBelief(0.001, (0.9, 0.8))
Composing Optimisers
Flux defines a special kind of optimiser simply called Optimiser
which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Flux defines some basic decays including ExpDecay
, InvDecay
etc.
opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())
Here we apply exponential decay to the Descent optimiser. The defaults of ExpDecay say that its learning rate will be decayed every 1000 steps. It is then applied like any optimiser.
using Flux

w = randn(10, 10)
w1 = randn(10, 10)
ps = Params([w, w1])

loss(x) = Flux.Losses.mse(w * x, w1 * x)

loss(rand(10)) # around 9

for t = 1:10^5
    θ = Params([w, w1])
    θ̄ = gradient(() -> loss(rand(10)), θ)
    Flux.Optimise.update!(opt, θ, θ̄)
end

loss(rand(10)) # around 0.9
It is possible to compose optimisers for some added flexibility.
Flux.Optimise.Optimiser — Type
Optimiser(a, b, c...)
Combine several optimisers into one; each optimiser produces a modified gradient that will be fed into the next, and this is finally applied to the parameter as usual.
This will be replaced by Optimisers.OptimiserChain in Flux 0.15.
Scheduling Optimisers
In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in ParameterSchedulers.jl. The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimisers. Below, we provide a brief snippet illustrating a cosine annealing schedule with a momentum optimiser.
First, we import ParameterSchedulers.jl and initialize a cosine annealing schedule to vary the learning rate between 1e-4 and 1e-2 every 10 steps. We also create a new Momentum optimiser.
using ParameterSchedulers
opt = Momentum()
schedule = Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10)
for (eta, epoch) in zip(schedule, 1:100)
    opt.eta = eta
    # your training code here
end
schedule can also be indexed (e.g. schedule(100)) or iterated like any iterator in Julia.
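For instance, reusing the Cos schedule defined in the snippet above (a small illustrative sketch, not taken from the original docs):
schedule(1)    # learning rate at the first step
schedule(100)  # learning rate at step 100

for (eta, step) in zip(schedule, 1:5)
    @show step eta
end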
ParameterSchedulers.jl schedules are stateless (they don't store their iteration state). If you want a stateful schedule, you can use ParameterSchedulers.Stateful:
using ParameterSchedulers: Stateful, next!
schedule = Stateful(Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10))
for epoch in 1:100
    opt.eta = next!(schedule)
    # your training code here
end
ParameterSchedulers.jl allows for many more scheduling policies including arbitrary functions, looping any function with a given period, or sequences of many schedules. See the ParameterSchedulers.jl documentation for more info.
Decays
Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.
Flux.Optimise.ExpDecay — Type
ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4, start = 1)
Discount the learning rate η by the factor decay every decay_step steps till a minimum of clip.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- decay: Factor by which the learning rate is discounted.
- decay_step: Schedule decay operations by setting the number of steps between two decay operations.
- clip: Minimum value of learning rate.
- start: Step at which the decay starts.
See also the Scheduling Optimisers section of the docs for more general scheduling techniques.
Examples
ExpDecay is typically composed with other optimisers as the last transformation of the gradient:
opt = Optimiser(Adam(), ExpDecay(1.0))
Note: you may want to start with η = 1 in ExpDecay when combined with other optimisers (Adam in this case) that have their own learning rate.
Flux.Optimise.InvDecay — Type
InvDecay(γ = 0.001)
Apply inverse time decay to an optimiser, so that the effective step size at iteration n is eta / (1 + γ * n), where eta is the initial step size. The wrapped optimiser's step size is not modified.
See also the Scheduling Optimisers section of the docs for more general scheduling techniques.
Examples
InvDecay is typically composed with other optimisers as the last transformation of the gradient:
# Inverse decay of the learning rate
# with starting value 0.001 and decay coefficient 0.01.
opt = Optimiser(Adam(1f-3), InvDecay(1f-2))
Flux.Optimise.WeightDecay — Type
WeightDecay(λ = 0)
Decay weights by $λ$. Typically composed with other optimisers as the first transformation to the gradient, making it equivalent to adding $L_2$ regularization with coefficient $λ$ to the loss.
Examples
opt = Optimiser(WeightDecay(1f-4), Adam())
Flux.Optimise.SignDecay — Type
SignDecay(λ = 1e-3)
Version of WeightDecay which implements $L_1$ regularisation, when composed with other optimisers as the first transformation to the gradient.
Examples
opt = Optimiser(SignDecay(1e-4), Adam())
Gradient Clipping
Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. An example usage is:
opt = Optimiser(ClipValue(1e-3), Adam(1e-3))
Flux.Optimise.ClipValue — Type
ClipValue(thresh)
Clip gradients when their absolute value exceeds thresh.
This will be replaced by Optimisers.ClipGrad in Flux 0.15.
Flux.Optimise.ClipNorm — Type
ClipNorm(thresh)
Clip gradients when their L2 norm exceeds thresh.
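ClipNorm is composed with other optimisers in the same way as ClipValue above; a small sketch with an arbitrary threshold value:
opt = Optimiser(ClipNorm(1.0), Adam(1e-3))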