Momentum
struct defined in module Flux.Optimise
```julia
Momentum(η = 0.01, ρ = 0.9)
```
Gradient descent optimiser with learning rate `η` and momentum `ρ`.
- Learning rate (`η`): amount by which gradients are discounted before updating the weights.
- Momentum (`ρ`): controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations; see the sketch below.
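Conceptually, momentum keeps a running velocity that is a `ρ`-discounted sum of past gradients and moves the weights along it, so consistent gradient directions accelerate while oscillating components cancel. Below is a minimal sketch of that classical update rule, assuming a plain array of weights; `momentum_step!` and its arguments are illustrative names, not part of Flux's API.

```julia
# Classical momentum update (a sketch, not Flux's internal implementation):
# the velocity `v` accumulates a ρ-discounted history of gradients.
function momentum_step!(w, grad, v; η = 0.01, ρ = 0.9)
    @. v = ρ * v - η * grad   # blend old velocity with the new gradient
    @. w = w + v              # move the weights along the velocity
    return w
end

# Toy usage on the quadratic loss sum(w .^ 2), whose gradient is 2w:
w = [1.0, -2.0, 3.0]
v = zero(w)
for _ in 1:10
    momentum_step!(w, 2 .* w, v)
end
```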
```julia
opt = Momentum()

opt = Momentum(0.01, 0.99)
```
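For context, here is a hedged end-to-end sketch of how this optimiser is typically used with the implicit-parameters training API from `Flux.Optimise`; the model, data, and loss below are illustrative placeholders, not part of the documented interface.

```julia
using Flux
using Flux.Optimise: Momentum, update!

# Illustrative setup (assumption, not from the docstring): fit y = 2x
# with a single Dense layer and a mean-squared-error loss.
model = Dense(1 => 1)
xs = rand(Float32, 1, 100)
ys = 2 .* xs

opt = Momentum(0.01, 0.9)
ps = Flux.params(model)          # implicit parameter collection
for epoch in 1:100
    gs = gradient(() -> Flux.Losses.mse(model(xs), ys), ps)
    update!(opt, ps, gs)         # apply the momentum update to all parameters
end
```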