Optimisation Rules
Any optimisation rule from Optimisers.jl can be used with train! and other training functions.
For full details of how the new interface works, see the Optimisers.jl documentation.
Optimisers Reference
All optimisers return an object that, when passed to train!, will update the parameters passed to it.
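For example, here is a minimal sketch of that pattern; the model, data, and loss are placeholders invented for illustration, not part of the reference below:
using Flux
model = Dense(10 => 1)                       # any Flux model
opt_state = Flux.setup(Adam(0.001), model)   # optimiser state matching the model's structure
data = [(rand(Float32, 10), rand(Float32, 1))]
loss(m, x, y) = Flux.mse(m(x), y)
Flux.train!(loss, model, data, opt_state)    # one pass over data, updating model in place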
Optimisers.Descent — Type
Descent(η = 1f-1)
Descent(; eta)
Classic gradient descent optimiser with learning rate η. For each parameter p and its gradient dp, this runs p -= η*dp.
Parameters
- Learning rate (η == eta): Amount by which gradients are discounted before updating the weights.
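To make the rule concrete, here is a small sketch applying Descent to a plain array with Optimisers.jl directly (the values are illustrative only):
julia> using Optimisers

julia> p = [1.0, 2.0];

julia> state = Optimisers.setup(Descent(0.1), p);

julia> Optimisers.update(state, p, [10.0, 10.0])[2]  # p .- 0.1 .* dp
2-element Vector{Float64}:
 0.0
 1.0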
Optimisers.Momentum — Type
Momentum(η = 0.01, ρ = 0.9)
Momentum(; [eta, rho])
Gradient descent optimizer with learning rate η and momentum ρ.
Parameters
- Learning rate (η == eta): Amount by which gradients are discounted before updating the weights.
- Momentum (ρ == rho): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.
Optimisers.Nesterov — Type
Nesterov(η = 0.001, ρ = 0.9)
Gradient descent optimizer with learning rate η and Nesterov momentum ρ.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Nesterov momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.
Optimisers.RMSProp — Type
RMSProp(η = 0.001, ρ = 0.9, ϵ = 1e-8; centred = false)
RMSProp(; [eta, rho, epsilon, centred])
Optimizer using the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.
Centred RMSProp is a variant which normalises gradients by an estimate of their variance, instead of their second moment.
Parameters
- Learning rate (η == eta): Amount by which gradients are discounted before updating the weights.
- Momentum (ρ == rho): Controls the acceleration of gradient descent in the prominent direction, in effect dampening oscillations.
- Machine epsilon (ϵ == epsilon): Constant to prevent division by zero (no need to change default).
- Keyword centred (or centered): Indicates whether to use the centred variant of the algorithm.
Optimisers.Adam — Type
Adam(η = 0.001, β = (0.9, 0.999), ϵ = 1e-8)
Adam optimiser.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
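For instance, a hedged sketch of overriding the defaults positionally (model stands in for whatever you are training):
rule = Adam(3e-4, (0.9, 0.99))        # custom learning rate and momentum decays
opt_state = Flux.setup(rule, model)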
Optimisers.RAdam — Type
RAdam(η = 0.001, β = (0.9, 0.999), ϵ = 1e-8)
Rectified Adam optimizer.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.AdaMax — Type
AdaMax(η = 0.001, β = (0.9, 0.999), ϵ = 1e-8)
AdaMax is a variant of Adam based on the ∞-norm.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.AdaGrad — Type
AdaGrad(η = 0.1, ϵ = 1e-8)
AdaGrad optimizer. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.AdaDelta — Type
AdaDelta(ρ = 0.9, ϵ = 1e-8)
AdaDelta is a version of AdaGrad adapting its learning rate based on a window of past gradient updates. Parameters don't need tuning.
Parameters
- Rho (ρ): Factor by which the gradient is decayed at each time step.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.AMSGrad — Type
AMSGrad(η = 0.001, β = (0.9, 0.999), ϵ = 1e-8)
The AMSGrad version of the Adam optimiser. Parameters don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.NAdam — Type
NAdam(η = 0.001, β = (0.9, 0.999), ϵ = 1e-8)
NAdam is a Nesterov variant of Adam. Parameters don't need tuning.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.AdamW — Function
AdamW(η = 0.001, β = (0.9, 0.999), λ = 0, ϵ = 1e-8)
AdamW(; [eta, beta, lambda, epsilon])
AdamW is a variant of Adam fixing (as in repairing) its weight decay regularization. Implemented as an OptimiserChain of Adam and WeightDecay.
Parameters
- Learning rate (η == eta): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple == beta): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Weight decay (λ == lambda): Controls the strength of $L_2$ regularisation.
- Machine epsilon (ϵ == epsilon): Constant to prevent division by zero (no need to change default).
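As a sketch of what this expands to (values chosen only for illustration, following the chain form described above):
rule = AdamW(0.001, (0.9, 0.999), 0.004)
# roughly equivalent to composing the two rules by hand:
rule_by_hand = OptimiserChain(Adam(0.001, (0.9, 0.999)), WeightDecay(0.004))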
Optimisers.OAdam — Type
OAdam(η = 0.001, β = (0.5, 0.9), ϵ = 1e-8)
OAdam (Optimistic Adam) is a variant of Adam adding an "optimistic" term suitable for adversarial training.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ): Constant to prevent division by zero (no need to change default).
Optimisers.AdaBelief — Type
AdaBelief(η = 0.001, β = (0.9, 0.999), ϵ = 1e-16)
The AdaBelief optimiser is a variant of the well-known Adam optimiser.
Parameters
- Learning rate (η): Amount by which gradients are discounted before updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
- Machine epsilon (ϵ::Float32): Constant to prevent division by zero (no need to change default).
Optimisers.Lion — Type
Lion(η = 0.001, β = (0.9, 0.999))
Lion optimiser.
Parameters
- Learning rate (η): Magnitude by which gradients are updating the weights.
- Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
Composing Optimisers
Flux (through Optimisers.jl) defines a special kind of optimiser called OptimiserChain which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient that will be fed into the next, and the resultant update will be applied to the parameter as usual. A classic use case is where adding decays is desirable. Optimisers.jl defines the basic decay corresponding to an $L_2$ regularization in the loss as WeightDecay.
opt = OptimiserChain(WeightDecay(1e-4), Descent())
Here we apply the weight decay to the Descent optimiser. The resulting optimiser opt can be used like any other optimiser.
w = [randn(10, 10), randn(10, 10)]
opt_state = Flux.setup(opt, w)
loss(w, x) = Flux.mse(w[1] * x, w[2] * x)
loss(w, rand(10)) # around 0.9
for t = 1:10^5
  g = gradient(w -> loss(w, rand(10)), w)[1]
  Flux.update!(opt_state, w, g)
end
loss(w, rand(10)) # now much smaller than before training
It is possible to compose optimisers for some added flexibility.
Optimisers.OptimiserChain — Type
OptimiserChain(opts...)
Compose a sequence of optimisers so that each opt in opts updates the gradient, in the order specified.
With an empty sequence, OptimiserChain() is the identity, so update! will subtract the full gradient from the parameters. This is equivalent to Descent(1).
Example
julia> o = OptimiserChain(ClipGrad(1.0), Descent(0.1));
julia> m = (zeros(3),);
julia> s = Optimisers.setup(o, m)
(Leaf(OptimiserChain(ClipGrad(1.0), Descent(0.1)), (nothing, nothing)),)
julia> Optimisers.update(s, m, ([0.3, 1, 7],))[2] # clips before discounting
([-0.03, -0.1, -0.1],)
Scheduling Optimisers
In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in ParameterSchedulers.jl. The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimisers. Below, we provide a brief snippet illustrating a cosine annealing schedule with a momentum optimiser.
First, we import ParameterSchedulers.jl and initialize a cosine annealing schedule to vary the learning rate between 1e-4 and 1e-2 every 10 steps. We also create a new Momentum optimiser.
using ParameterSchedulers
opt = Momentum()
opt_state = Flux.setup(opt, model)  # `model` stands for whatever you are training
schedule = Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10)
for (eta, epoch) in zip(schedule, 1:100)
  Flux.adjust!(opt_state, eta)  # Optimisers.jl rules are immutable, so adjust the state instead
  # your training code here
end
schedule can also be indexed (e.g. schedule(100)) or iterated like any iterator in Julia.
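For example (a small sketch of those two access patterns):
schedule(1)          # learning rate at step 1
schedule(100)        # learning rate at step 100
first(schedule, 5)   # the first five values, obtained by iteration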
ParameterSchedulers.jl schedules are stateless (they don't store their iteration state). If you want a stateful schedule, you can use ParameterSchedulers.Stateful:
using ParameterSchedulers: Stateful, next!
schedule = Stateful(Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10))
for epoch in 1:100
  Flux.adjust!(opt_state, next!(schedule))  # advance the schedule and apply the new learning rate
  # your training code here
end
ParameterSchedulers.jl allows for many more scheduling policies including arbitrary functions, looping any function with a given period, or sequences of many schedules. See the ParameterSchedulers.jl documentation for more info.
Decays
Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.
Optimisers.SignDecay — Type
SignDecay(λ = 1e-3)
Implements $L_1$ regularisation, also known as LASSO regression, when composed with other rules as the first transformation in an OptimiserChain.
It does this by adding λ .* sign(x) to the gradient. This is equivalent to adding λ * sum(abs, x) == λ * norm(x, 1) to the loss.
See also WeightDecay for $L_2$ normalisation. They can be used together: OptimiserChain(SignDecay(0.012), WeightDecay(0.034), Adam()) is equivalent to adding 0.012 * norm(x, 1) + 0.017 * norm(x, 2)^2 to the loss function.
Parameters
- Penalty (λ ≥ 0): Controls the strength of the regularisation.
Optimisers.WeightDecay — Type
WeightDecay(λ = 5e-4)
Implements $L_2$ regularisation, also known as ridge regression, when composed with other rules as the first transformation in an OptimiserChain.
It does this by adding λ .* x to the gradient. This is equivalent to adding λ/2 * sum(abs2, x) == λ/2 * norm(x)^2 to the loss.
See also SignDecay for $L_1$ normalisation.
Parameters
- Penalty (λ ≥ 0): Controls the strength of the regularisation.
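As a sketch, the two decays can be combined with any rule, for example Adam (the penalty values and model are placeholders):
opt = OptimiserChain(SignDecay(1e-3), WeightDecay(5e-4), Adam(1e-3))  # L1 + L2 penalty, then Adam
opt_state = Flux.setup(opt, model)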
Gradient Clipping
Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. An example usage is
opt = OptimiserChain(ClipGrad(1e-3), Adam(1e-3))
Optimisers.ClipGrad — Type
ClipGrad(δ = 10)
Restricts every gradient component to obey -δ ≤ dx[i] ≤ δ.
Typically composed with other rules using OptimiserChain.
See also ClipNorm.
Optimisers.ClipNorm — Type
ClipNorm(ω = 10, p = 2; throw = true)
Scales any gradient array for which norm(dx, p) > ω to stay at this threshold (unless p == 0).
Throws an error if the norm is infinite or NaN, which you can turn off with throw = false.
Typically composed with other rules using OptimiserChain.
See also ClipGrad.
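A typical use is norm-based clipping ahead of another rule, sketched here with model as a placeholder:
opt = OptimiserChain(ClipNorm(1.0), Adam(1e-3))  # rescale any gradient whose 2-norm exceeds 1
opt_state = Flux.setup(opt, model)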