RMSProp
struct defined in module Flux.Optimise
RMSProp(η = 0.001, ρ = 0.9, ϵ = 1.0e-8)
Optimizer using the RMSProp algorithm. Often a good choice for recurrent networks. Parameters other than learning rate generally don't need tuning.
Parameters
- Learning rate (η): amount by which gradients are discounted before updating the weights.
- Momentum (ρ): controls the acceleration of gradient descent in the prominent direction, in effect damping oscillations. See the sketch below for how both enter the update rule.
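To make the roles of η and ρ concrete, here is a minimal sketch of the update rule RMSProp applies at each step: a running average of squared gradients is kept per parameter, and each raw gradient is divided by the root of that average before being scaled by η. The names rmsprop_step! and acc are illustrative, not Flux internals.

# Illustrative sketch of one RMSProp step (not Flux's internal code).
function rmsprop_step!(w, g, acc; η = 0.001, ρ = 0.9, ϵ = 1.0e-8)
    @. acc = ρ * acc + (1 - ρ) * g^2   # moving average of squared gradients, decayed by ρ
    @. w  -= η * g / (sqrt(acc) + ϵ)   # per-parameter step, scaled by η and the running RMS
    return w
end

w, g, acc = randn(3), randn(3), zeros(3)  # parameters, gradient, accumulator
rmsprop_step!(w, g, acc)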
Examples

opt = RMSProp()             # default η = 0.001, ρ = 0.9
opt = RMSProp(0.002, 0.95)  # custom learning rate and momentum
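A minimal sketch of using the optimizer in a training step, assuming the implicit-parameters API (Flux.params and Flux.Optimise.update!) that accompanies Flux.Optimise; the model and data here are toy placeholders.

using Flux
using Flux.Optimise: RMSProp, update!

model = Dense(10, 1)                                  # toy model (placeholder)
x, y  = rand(Float32, 10, 16), rand(Float32, 1, 16)   # toy batch (placeholder)

opt = RMSProp(0.002, 0.95)
ps  = Flux.params(model)
gs  = gradient(() -> Flux.mse(model(x), y), ps)
update!(opt, ps, gs)   # one RMSProp step on every parameter in ps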