Optimisers

Consider a simple linear regression. We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters W and b.

using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = (W * x) .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # Dummy data
l = loss(x, y) # ~ 3

θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)

We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

η = 0.1 # Learning Rate
for p in (W, b)
  p .-= η * grads[p]
end

Running this will alter the parameters W and b and our loss should go down. Flux provides a more general way to do optimiser updates like this.

using Flux: update!

opt = Descent(0.1) # Gradient descent with learning rate 0.1

for p in (W, b)
  update!(opt, p, grads[p])
end

An optimiser's update! accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass opt to our training loop, which will update all parameters of the model in a loop. Crucially, we can now easily swap Descent for a more advanced optimiser such as Adam.
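
For instance, the same per-parameter loop works unchanged with a stateful optimiser. A minimal sketch, reusing the W, b, and grads defined above (0.001 is simply Adam's usual default rate):

opt = Adam(0.001)

for p in (W, b)
  update!(opt, p, grads[p])
end

Adam keeps per-parameter state inside opt, so repeated calls to update! with the same opt continue the same optimisation trajectory.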

Optimiser Reference

All optimiser constructors return an object that, when passed to train!, will update the parameters passed to it.
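
For example, here is a minimal sketch of passing an optimiser to Flux.train!, reusing the loss, W, and b defined earlier together with a single dummy data point:

data = [(rand(5), rand(2))] # one (x, y) pair

opt = Adam()
Flux.train!(loss, Flux.params(W, b), data, opt)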

Flux.Optimise.update! (Function)
update!(opt, p, g)
update!(opt, ps::Params, gs)

Perform an update step of the parameters ps (or the single parameter p) according to optimizer opt and the gradients gs (the gradient g).

As a result, the parameters are mutated and the optimizer's internal state may change. The gradient could be mutated as well.

Flux.Optimise.Descent (Type)
Descent(η = 0.1)

Classic gradient descent optimiser with learning rate η. For each parameter p and its gradient δp, this runs p -= η * δp.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.

Examples

opt = Descent()

opt = Descent(0.3)

ps = Flux.params(model)

gs = gradient(ps) do
    loss(x, y)
end

Flux.Optimise.update!(opt, ps, gs)
Flux.Optimise.Momentum (Type)
Momentum(η = 0.01, ρ = 0.9)

Gradient descent optimizer with learning rate η and momentum ρ.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

Examples

opt = Momentum()

opt = Momentum(0.01, 0.99)
Flux.Optimise.Nesterov (Type)
Nesterov(η = 0.001, ρ = 0.9)

Gradient descent optimizer with learning rate η and Nesterov momentum ρ.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Nesterov momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

Examples

opt = Nesterov()

opt = Nesterov(0.003, 0.95)
Flux.Optimise.RMSProp (Type)
RMSProp(η = 0.001, ρ = 0.9, ϵ = 1.0e-8)

Optimizer using the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Momentum (ρ): Controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations.

Examples

opt = RMSProp()

opt = RMSProp(0.002, 0.95)
Flux.Optimise.Adam (Type)
Adam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

Adam optimiser.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = Adam()

opt = Adam(0.001, (0.9, 0.8))
Flux.Optimise.RAdam (Type)
RAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

Rectified Adam optimizer.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = RAdam()

opt = RAdam(0.001, (0.9, 0.8))
Flux.Optimise.AdaMax (Type)
AdaMax(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

AdaMax is a variant of Adam based on the ∞-norm.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = AdaMax()

opt = AdaMax(0.001, (0.9, 0.995))
Flux.Optimise.AdaGrad (Type)
AdaGrad(η = 0.1, ϵ = 1.0e-8)

AdaGrad optimizer. It has parameter-specific learning rates based on how frequently each parameter is updated. Parameters don't need tuning.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.

Examples

opt = AdaGrad()

opt = AdaGrad(0.001)
Flux.Optimise.AdaDelta (Type)
AdaDelta(ρ = 0.9, ϵ = 1.0e-8)

AdaDelta is a version of AdaGrad that adapts its learning rate based on a window of past gradient updates. Parameters don't need tuning.

Parameters

  • Rho (ρ): Factor by which the gradient is decayed at each time step.

Examples

opt = AdaDelta()

opt = AdaDelta(0.89)
Flux.Optimise.AMSGrad (Type)
AMSGrad(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

The AMSGrad version of the Adam optimiser. Parameters don't need tuning.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = AMSGrad()

opt = AMSGrad(0.001, (0.89, 0.995))
Flux.Optimise.NAdam (Type)
NAdam(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

NAdam is a Nesterov variant of Adam. Parameters don't need tuning.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = NAdam()

opt = NAdam(0.002, (0.89, 0.995))
Flux.Optimise.AdamW (Function)
AdamW(η = 0.001, β::Tuple = (0.9, 0.999), decay = 0)

AdamW is a variant of Adam fixing (as in repairing) its weight decay regularization.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.
  • decay: Decay applied to weights during optimisation.

Examples

opt = AdamW()

opt = AdamW(0.001, (0.89, 0.995), 0.1)
Flux.Optimise.OAdam (Type)
OAdam(η = 0.0001, β::Tuple = (0.5, 0.9), ϵ = 1.0e-8)

OAdam (Optimistic Adam) is a variant of Adam adding an "optimistic" term suitable for adversarial training.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = OAdam()

opt = OAdam(0.001, (0.9, 0.995))
Flux.Optimise.AdaBelief (Type)
AdaBelief(η = 0.001, β::Tuple = (0.9, 0.999), ϵ = 1.0e-8)

The AdaBelief optimiser is a variant of the well-known Adam optimiser.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • Decay of momentums (β::Tuple): Exponential decay for the first (β1) and the second (β2) momentum estimate.

Examples

opt = AdaBelief()

opt = AdaBelief(0.001, (0.9, 0.8))

Optimiser Interface

Flux's optimisers are built around a struct that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the apply! function which takes the optimiser as the first argument followed by the parameter and its corresponding gradient.

In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work on this with a simple example.

mutable struct Momentum
  eta
  rho
  velocity
end

Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict())

The Momentum type will act as our optimiser in this case. Notice that we have added all the parameters as fields, along with the velocity, which we will use as our state dictionary. Each parameter in our models will get an entry in there. We can now define the rule applied when this optimiser is invoked.

function Flux.Optimise.apply!(o::Momentum, x, Δ)
  η, ρ = o.eta, o.rho
  v = get!(o.velocity, x, zero(x))::typeof(x)  # fetch (or initialise) this parameter's velocity
  @. v = ρ * v - η * Δ
  @. Δ = -v  # the step that update! will subtract from x
end

This is the basic definition of a Momentum update rule given by:

\[v = ρ v - η Δ\]
\[w = w - v\]

The apply! function defines the update rule for an optimiser opt, given a parameter x and its gradient Δ, and returns the modified gradient that will be subtracted from the parameter. Here, the velocity for each parameter x is fetched from (or initialised in) the state dictionary, and updating it in place mutates the optimiser's state.

Flux calls this function internally via update!, which shares the same API as apply! but also handles collections of parameters gracefully.
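
To see the custom rule in action, here is a minimal sketch of a single hand-rolled step, assuming the Momentum struct and apply! method above have been defined:

W = rand(3, 3)
Δ = rand(3, 3) # stand-in for a gradient of some loss with respect to W
opt = Momentum(0.1, 0.9)

Δ′ = Flux.Optimise.apply!(opt, W, Δ) # velocity-adjusted gradient
W .-= Δ′ # the step that update! would take

Calling apply! again with the same W reuses the velocity stored in opt.velocity, which is what produces the momentum effect.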

Composing Optimisers

Flux defines a special kind of optimiser simply called Optimiser, which takes arbitrary optimisers as input. It behaves like any other optimiser, but applies the optimisers it contains sequentially: each produces a modified gradient that is fed into the next, and the resulting update is applied to the parameter as usual. A classic use case is adding decays; Flux defines some basic ones, including ExpDecay and InvDecay.

opt = Optimiser(ExpDecay(1, 0.1, 1000, 1e-4), Descent())

Here we apply exponential decay to the Descent optimiser: ExpDecay(1, 0.1, 1000, 1e-4) starts from a rate of 1 and multiplies it by 0.1 every 1000 steps, clipped at a minimum of 1e-4. The composed optimiser is then used like any other.

w = randn(10, 10)
w1 = randn(10,10)
ps = Params([w, w1])

loss(x) = Flux.Losses.mse(w * x, w1 * x)

loss(rand(10)) # around 9

for t = 1:10^5
  θ̄ = gradient(() -> loss(rand(10)), ps)
  Flux.Optimise.update!(opt, ps, θ̄)
end

loss(rand(10)) # around 0.9

It is possible to compose optimisers for some added flexibility.

Flux.Optimise.Optimiser (Type)
Optimiser(a, b, c...)

Combine several optimisers into one; each optimiser produces a modified gradient that will be fed into the next, and this is finally applied to the parameter as usual.


Scheduling Optimisers

In practice, it is fairly common to schedule the learning rate of an optimiser to obtain faster convergence. There are a variety of popular scheduling policies, and you can find implementations of them in ParameterSchedulers.jl. The documentation for ParameterSchedulers.jl provides a more detailed overview of the different scheduling policies, and how to use them with Flux optimizers. Below, we provide a brief snippet illustrating a cosine annealing schedule with a momentum optimiser.

First, we import ParameterSchedulers.jl and initialize a cosine annealing schedule to vary the learning rate between 1e-4 and 1e-2 every 10 steps. We also create a new Momentum optimiser.

using ParameterSchedulers

opt = Momentum()
schedule = Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10)
for (eta, epoch) in zip(schedule, 1:100)
  opt.eta = eta
  # your training code here
end

schedule can also be indexed (e.g. schedule(100)) or iterated like any iterator in Julia.

ParameterSchedulers.jl schedules are stateless (they don't store their iteration state). If you want a stateful schedule, you can use ParameterSchedulers.Stateful:

using ParameterSchedulers: Stateful, next!

schedule = Stateful(Cos(λ0 = 1e-4, λ1 = 1e-2, period = 10))
for epoch in 1:100
  opt.eta = next!(schedule)
  # your training code here
end

ParameterSchedulers.jl allows for many more scheduling policies including arbitrary functions, looping any function with a given period, or sequences of many schedules. See the ParameterSchedulers.jl documentation for more info.

Decays

Similar to optimisers, Flux also defines some simple decays that can be used in conjunction with other optimisers, or standalone.

Flux.Optimise.ExpDecay (Type)
ExpDecay(η = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4, start = 1)

Discount the learning rate η by the factor decay every decay_step steps, until it reaches a minimum of clip.

Parameters

  • Learning rate (η): Amount by which gradients are discounted before updating the weights.
  • decay: Factor by which the learning rate is discounted.
  • decay_step: Schedule decay operations by setting the number of steps between two decay operations.
  • clip: Minimum value of learning rate.
  • start: Step at which the decay starts.

See also the Scheduling Optimisers section of the docs for more general scheduling techniques.

Examples

ExpDecay is typically composed with other optimizers as the last transformation of the gradient:

opt = Optimiser(Adam(), ExpDecay(1.0))

Note: you may want to start with η=1 in ExpDecay when combined with other optimizers (Adam in this case) that have their own learning rate.

Flux.Optimise.InvDecay (Type)
InvDecay(γ = 0.001)

Apply inverse time decay to an optimiser, so that the effective step size at iteration n is eta / (1 + γ * n) where eta is the initial step size. The wrapped optimiser's step size is not modified.

See also the Scheduling Optimisers section of the docs for more general scheduling techniques.

Examples

InvDecay is typically composed with other optimizers as the last transformation of the gradient:

# Inverse decay of the learning rate
# with starting value 0.001 and decay coefficient 0.01.
opt = Optimiser(Adam(1f-3), InvDecay(1f-2))
Flux.Optimise.WeightDecay (Type)
WeightDecay(λ = 0)

Decay weights by λ. Typically composed with other optimizers as the first transformation of the gradient, making it equivalent to adding L2 regularization with coefficient λ to the loss.

Examples

opt = Optimiser(WeightDecay(1f-4), Adam())

Gradient Clipping

Gradient clipping is useful for training recurrent neural networks, which have a tendency to suffer from the exploding gradient problem. An example usage is

opt = Optimiser(ClipValue(1e-3), Adam(1e-3))
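
ClipNorm is also available; instead of clipping each gradient entry, it rescales gradients whose overall norm exceeds a threshold, and it composes the same way:

opt = Optimiser(ClipNorm(1.0), Adam(1e-3))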

Optimisers.jl

Flux re-exports some utility functions from Optimisers.jl, and makes the complete Optimisers package available under the Flux.Optimisers namespace.

Optimisers.destructure (Function)
destructure(model) -> vector, reconstructor

Copies all trainable, isnumeric parameters in the model to a vector, and also returns a function which reverses this transformation. Differentiable.

Example

julia> v, re = destructure((x=[1.0, 2.0], y=(sin, [3.0 + 4.0im])))
(ComplexF64[1.0 + 0.0im, 2.0 + 0.0im, 3.0 + 4.0im], Restructure(NamedTuple, ..., 3))

julia> re([3, 5, 7+11im])
(x = [3.0, 5.0], y = (sin, ComplexF64[7.0 + 11.0im]))

If model contains various number types, they are promoted to make vector, and are usually restored by Restructure. Such restoration follows the rules of ChainRulesCore.ProjectTo, and thus will restore floating point precision, but will permit more exotic numbers like ForwardDiff.Dual.

If model contains only GPU arrays, then vector will also live on the GPU. At present, a mixture of GPU and ordinary CPU arrays is undefined behaviour.

Optimisers.trainable (Function)
trainable(x::Layer) -> NamedTuple

This should be overloaded to make optimisers ignore some fields of every Layer, which would otherwise contain trainable parameters. (Elements such as functions and sizes are always ignored.)

The default is Functors.children(x), usually a NamedTuple of all fields, and trainable(x) must contain a subset of these.
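
As a sketch, suppose a hypothetical layer type Affine, with fields W, b, and σ, should expose only W to the optimiser:

import Flux, Optimisers

struct Affine
  W
  b
  σ
end

Flux.@functor Affine

# Return a subset of the fields; b will be ignored by optimisers.
Optimisers.trainable(a::Affine) = (; W = a.W)

Functions such as σ are always ignored anyway; the overload is only needed to exclude numeric fields like b.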

Optimisers.isnumeric (Function)
isnumeric(x) -> Bool

Returns true on any parameter to be adjusted by Optimisers.jl, namely arrays of non-integer numbers. Returns false on all other types.

Requires also that Functors.isleaf(x) == true, to focus on e.g. the parent of a transposed matrix, not the wrapper.
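
A few illustrative calls, assuming behaviour that follows the rules above:

import Optimisers

Optimisers.isnumeric(rand(3)) # true: a float array is a trainable parameter
Optimisers.isnumeric([1, 2, 3]) # false: integer arrays are not adjusted
Optimisers.isnumeric(sin) # false: not an array at all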