Basic Layers
These core layers form the foundation of almost all neural networks.
Flux.Chain — Type
Chain(layers...)
Chain multiple layers / functions together, so that they are called in sequence on a given input.
Chain also supports indexing and slicing, e.g. m[2] or m[1:end-1]. m[1:3](x) will calculate the output of the first three layers.
Examples
julia> m = Chain(x -> x^2, x -> x+1);
julia> m(5) == 26
true
julia> m = Chain(Dense(10, 5), Dense(5, 2));
julia> x = rand(10);
julia> m(x) == m[2](m[1](x))
true
Flux.Dense — Type
Dense(in::Integer, out::Integer, σ = identity)
Create a traditional Dense layer with parameters W and b.
y = σ.(W * x .+ b)
The input x must be a vector of length in, or a batch of vectors represented as an in × N matrix. The output y will be a vector or batch of length out.
Example
julia> d = Dense(5, 2)
Dense(5, 2)
julia> d(rand(5))
2-element Array{Float32,1}:
-0.16210233
0.123119034
Convolution and Pooling Layers
These layers are used to build convolutional neural networks (CNNs).
Flux.Conv — Type
Conv(filter, in => out, σ = identity; init = glorot_uniform,
     stride = 1, pad = 0, dilation = 1)
filter = (2,2)
in = 1
out = 16
Conv((2, 2), 1=>16, relu)
Standard convolutional layer. filter should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.
Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.
Accepts keyword arguments weight and bias to set the corresponding fields. Setting bias to Flux.Zeros() will switch bias off for the layer.
Takes the keyword arguments pad, stride and dilation. For input dimension N, pad should be a single Integer indicating an equal padding value for each spatial dimension, a tuple of length (N-2) to apply symmetric padding, or a tuple of length 2*(N-2) indicating padding values for each spatial dimension at both ends. stride and dilation should be a single Integer or a tuple with N-2 parameters. Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.
Examples
Apply a Conv layer to a 1-channel input using a 2×2 window filter size, giving us a 16-channel output. Output is activated with ReLU.
filter = (2,2)
in = 1
out = 16
Conv(filter, in => out, relu)
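As a further sketch (not from the original docstring; the sizes here are illustrative), pad=SamePad() with the default stride of 1 preserves the spatial dimensions:
using Flux

# 3×3 filter, 3 input channels, 8 output channels; SamePad keeps the 100×100 spatial size
c = Conv((3, 3), 3 => 8, relu; pad = SamePad())
size(c(rand(Float32, 100, 100, 3, 1)))  # (100, 100, 8, 1)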
Flux.AdaptiveMaxPool — Type
AdaptiveMaxPool(out)
Adaptive max pooling layer. out is the desired output size (batch and channel dimensions excluded).
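For instance (an illustrative sketch, not from the original docstring), pooling 10×10 feature maps down to a requested 5×5 output:
using Flux

# output spatial size is fixed at (5, 5) regardless of the input's spatial size
pool = AdaptiveMaxPool((5, 5))
size(pool(rand(Float32, 10, 10, 3, 1)))  # (5, 5, 3, 1)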
Flux.MaxPool — Type
MaxPool(k; pad = 0, stride = k)
Max pooling layer. k is the size of the window for each dimension of the input.
Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.
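A brief usage sketch (sizes are illustrative): with the default stride = k, a 2×2 window halves each spatial dimension:
using Flux

m = MaxPool((2, 2))
size(m(rand(Float32, 28, 28, 1, 8)))  # (14, 14, 1, 8)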
Flux.GlobalMaxPool — Type
GlobalMaxPool()
Global max pooling layer.
Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing max pooling on the complete (w,h)-shaped feature maps.
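For example (an illustrative sketch), collapsing the feature maps so a Dense layer can follow:
using Flux

m = Chain(Conv((3, 3), 1=>16), GlobalMaxPool(), Flux.flatten, Dense(16, 10))
size(m(rand(Float32, 28, 28, 1, 4)))  # (10, 4)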
Flux.AdaptiveMeanPool — Type
AdaptiveMeanPool(out)
Adaptive mean pooling layer. out is the desired output size (batch and channel dimensions excluded).
Flux.MeanPool — Type
MeanPool(k; pad = 0, stride = k)
Mean pooling layer. k is the size of the window for each dimension of the input.
Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.
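A short sketch (sizes illustrative): averaging over non-overlapping 3×3 windows:
using Flux

m = MeanPool((3, 3))
size(m(rand(Float32, 9, 9, 2, 1)))  # (3, 3, 2, 1)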
Flux.GlobalMeanPool — Type
GlobalMeanPool()
Global mean pooling layer.
Transforms (w,h,c,b)-shaped input into (1,1,c,b)-shaped output, by performing mean pooling on the complete (w,h)-shaped feature maps.
Flux.DepthwiseConv — Type
DepthwiseConv(filter::Tuple, in=>out)
DepthwiseConv(filter::Tuple, in=>out, activation)
DepthwiseConv(filter, in => out, σ = identity; init = glorot_uniform,
              stride = 1, pad = 0, dilation = 1)
Depthwise convolutional layer. filter should be a tuple like (2, 2). in and out specify the number of input and output channels respectively. Note that out must be an integer multiple of in.
Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.
Accepts keyword arguments weight and bias to set the corresponding fields. Setting bias to Flux.Zeros() will switch bias off for the layer.
Takes the keyword arguments pad, stride and dilation. For input dimension N, pad should be a single Integer indicating an equal padding value for each spatial dimension, a tuple of length (N-2) to apply symmetric padding, or a tuple of length 2*(N-2) indicating padding values for each spatial dimension at both ends. stride and dilation should be a single Integer or a tuple with N-2 parameters. Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.
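As an illustrative sketch of the multiple-of-in constraint, 3 input channels map to 6 output channels, i.e. two filters per input channel:
using Flux

d = DepthwiseConv((3, 3), 3 => 6, relu)  # out = 6 is an integer multiple of in = 3
size(d(rand(Float32, 32, 32, 3, 1)))     # (30, 30, 6, 1)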
Flux.ConvTranspose — Type
ConvTranspose(filter, in=>out)
ConvTranspose(filter, in=>out, activation)
ConvTranspose(filter, in => out, σ = identity; init = glorot_uniform,
              stride = 1, pad = 0, dilation = 1)
Standard convolutional transpose layer. filter should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.
Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.
Accepts keyword arguments weight and bias to set the corresponding fields. Setting bias to Flux.Zeros() will switch bias off for the layer.
Takes the keyword arguments pad, stride and dilation. For input dimension N, pad should be a single Integer indicating an equal padding value for each spatial dimension, a tuple of length (N-2) to apply symmetric padding, or a tuple of length 2*(N-2) indicating padding values for each spatial dimension at both ends. stride and dilation should be a single Integer or a tuple with N-2 parameters. Use pad=SamePad() to apply padding so that outputsize == stride * inputsize.
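A hedged sketch of the upsampling behaviour described above (the filter size and channel counts are illustrative): with stride = 2 and pad = SamePad(), the output should be twice the input's spatial size:
using Flux

ct = ConvTranspose((4, 4), 16 => 8; stride = 2, pad = SamePad())
size(ct(rand(Float32, 7, 7, 16, 1)))  # (14, 14, 8, 1)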
Flux.CrossCor — Type
CrossCor(filter, in=>out)
CrossCor(filter, in=>out, activation)
CrossCor(filter, in => out, σ = identity; init = glorot_uniform,
         stride = 1, pad = 0, dilation = 1)
Standard cross-correlation layer. filter should be a tuple like (2, 2). in and out specify the number of input and output channels respectively.
Data should be stored in WHCN order (width, height, # channels, batch size). In other words, a 100×100 RGB image would be a 100×100×3×1 array, and a batch of 50 would be a 100×100×3×50 array.
Accepts keyword arguments weight and bias to set the corresponding fields. Setting bias to Flux.Zeros() will switch bias off for the layer.
Takes the keyword arguments pad, stride and dilation. For input dimension N, pad should be a single Integer indicating an equal padding value for each spatial dimension, a tuple of length (N-2) to apply symmetric padding, or a tuple of length 2*(N-2) indicating padding values for each spatial dimension at both ends. stride and dilation should be a single Integer or a tuple with N-2 parameters. Use pad=SamePad() to apply padding so that outputsize == inputsize / stride.
Examples
Apply a CrossCor layer to a 1-channel input using a 2×2 window filter size, giving us a 16-channel output. Output is activated with ReLU.
filter = (2,2)
in = 1
out = 16
CrossCor(filter, in => out, relu)
Flux.SamePad — Type
SamePad
Padding for convolutional layers will be calculated so that outputshape == inputshape when stride = 1.
For stride > 1 the output shape depends on the type of convolution layer.
Flux.flatten — Function
flatten(x::AbstractArray)
Reshape arbitrarily-shaped input into a matrix-shaped output, preserving the size of the last dimension. Equivalent to reshape(x, :, size(x)[end]).
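For instance:
using Flux

x = rand(Float32, 4, 4, 3, 2)
size(Flux.flatten(x))  # (48, 2): leading dimensions collapsed, batch dimension kept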
Flux.Zeros — Type
Zeros()
Zeros(size...)
Zeros(Type, size...)
Acts as a stand-in for an array of zeros that can be used during training, but is ignored by the optimisers.
Useful to turn bias off for a forward pass of a layer.
Examples
julia> Flux.Zeros(3,3)
3×3 Flux.Zeros{Bool,2}:
false false false
false false false
false false false
julia> Flux.Zeros(Float32, 3,3)
3×3 Flux.Zeros{Float32,2}:
0.0 0.0 0.0
0.0 0.0 0.0
0.0 0.0 0.0
julia> rand(3,3) .+ Flux.Zeros()
3×3 Array{Float64,2}:
0.198739 0.490459 0.785386
0.779074 0.39986 0.66383
0.854981 0.447292 0.314497
julia> bias_less_conv = Conv((2,2), 1=>3, bias = Flux.Zeros())
Conv((2, 2), 1=>3)
Flux.convfilter — Function
convfilter(filter::Tuple, in=>out)
Constructs a standard convolutional weight array with given filter and channels from in to out.
Accepts the keyword init (default: glorot_uniform) to control the sampling distribution.
See also: depthwiseconvfilter
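A small sketch (the dimension ordering shown is an assumption based on the WHCN convention used above): the weight array has one dimension per filter dimension, followed by the input and output channels:
using Flux

w = Flux.convfilter((3, 3), 4 => 8)
size(w)  # expected (3, 3, 4, 8)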
Flux.depthwiseconvfilter — Function
depthwiseconvfilter(filter::Tuple, in=>out)
Constructs a depthwise convolutional weight array defined by filter and channels from in to out.
Accepts the keyword init (default: glorot_uniform) to control the sampling distribution.
See also: convfilter
Recurrent Layers
Much like the core layers above, but can be used to process sequence data (as well as other kinds of structured data).
Flux.RNN — Function
RNN(in::Integer, out::Integer, σ = tanh)
The most basic recurrent layer; essentially acts as a Dense layer, but with the output fed back into the input each time step.
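As a usage sketch (not from the original docstring), a sequence is processed by broadcasting the layer over its time steps:
using Flux

m = RNN(10, 5)                          # 10 inputs, 5 hidden/output units
seq = [rand(Float32, 10) for t in 1:4]  # a 4-step sequence of length-10 vectors
out = m.(seq)                           # one length-5 output per time step
Flux.reset!(m)                          # restore the initial hidden state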
Flux.LSTM — Function
LSTM(in::Integer, out::Integer)
Long Short Term Memory recurrent layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.
See this article for a good overview of the internals.
Flux.GRU — Function
GRU(in::Integer, out::Integer)
Gated Recurrent Unit layer. Behaves like an RNN but generally exhibits a longer memory span over sequences.
See this article for a good overview of the internals.
Flux.Recur — Type
Recur(cell)
Recur takes a recurrent cell and makes it stateful, managing the hidden state in the background. cell should be a model of the form:
h, y = cell(h, x...)
For example, here's a recurrent network that keeps a running total of its inputs:
accum(h, x) = (h + x, x)
rnn = Flux.Recur(accum, 0)
rnn(2) # 2
rnn(3) # 3
rnn.state # 5
rnn.(1:10) # apply to a sequence
rnn.state # 60
Flux.reset! — Function
reset!(rnn)
Reset the hidden state of a recurrent layer back to its original value.
Assuming you have a Recur layer rnn, this is roughly equivalent to:
rnn.state = hidden(rnn.cell)
Other General Purpose Layers
These are marginally more obscure than the Basic Layers but, in contrast to the layers described in the other sections, are not readily grouped around a particular purpose (e.g. CNNs or RNNs).
Flux.Maxout — Type
Maxout(over)
The Maxout layer has a number of internal layers which all receive the same input. It returns the elementwise maximum of the internal layers' outputs.
Maxout over linear dense layers satisfies the universal approximation theorem.
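An illustrative sketch, constructing a Maxout over a tuple of two Dense layers with matching sizes:
using Flux

m = Maxout((Dense(10, 5), Dense(10, 5)))  # `over` is a tuple of layers
size(m(rand(Float32, 10)))                # (5,): elementwise max of the two outputs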
Flux.SkipConnection — Type
SkipConnection(layer, connection)
Create a skip connection which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer while the second is the unchanged, "skipped" input.
The simplest "ResNet"-type connection is just SkipConnection(layer, +), and requires the output of the layers to be the same shape as the input. Here is a more complicated example:
m = Conv((3,3), 4=>7, pad=(1,1))
x = ones(5,5,4,10);
size(m(x)) == (5, 5, 7, 10)
sm = SkipConnection(m, (mx, x) -> cat(mx, x, dims=3))
size(sm(x)) == (5, 5, 11, 10)
Normalisation & Regularisation
These layers don't affect the structure of the network but may improve training times or reduce overfitting.
Flux.normalise — Function
normalise(x; dims, ϵ=1e-5)
Normalise x to mean 0 and standard deviation 1 across the dimensions given by dims. ϵ is a small term added to the denominator for numerical stability.
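For instance (an illustrative sketch), normalising each column of a matrix:
using Flux

x = rand(Float32, 3, 4)
y = Flux.normalise(x; dims = 1)  # each column now has mean ≈ 0 and std ≈ 1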
Flux.BatchNorm — Type
BatchNorm(channels::Integer, σ = identity;
          initβ = zeros, initγ = ones,
          ϵ = 1e-8, momentum = .1)
Batch Normalization layer. channels should be the size of the channel dimension in your data (see below).
Given an array with N dimensions, call the N-1th the channel dimension. (For a batch of feature vectors this is just the data dimension, for WHCN images it's the usual channel dimension.)
BatchNorm computes the mean and variance for each W×H×1×N slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel bias and scale parameters).
Use testmode! during inference.
Examples
m = Chain(
Dense(28^2, 64),
BatchNorm(64, relu),
Dense(64, 10),
BatchNorm(10),
softmax)
Flux.dropout — Function
dropout(x, p; dims=:, active::Bool)
The dropout function. If active is true, for each input, either sets that input to 0 (with probability p) or scales it by 1 / (1 - p). dims specifies the unbroadcasted dimensions, e.g. dims=1 applies dropout along columns and dims=2 along rows. This is used as a regularisation, i.e. it reduces overfitting during training.
If active is false, it just returns the input x.
Warning: when using this function, you have to manually manage the activation state. In practice, dropout is used while training but is deactivated in the inference phase. This can be automatically managed using the Dropout layer instead of the dropout function.
The Dropout layer is what you should use in most scenarios.
Flux.Dropout — Type
Dropout(p, dims = :)
Dropout layer. In the forward pass, applies the Flux.dropout function on the input.
Does nothing to the input once Flux.testmode! is true.
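A usage sketch (sizes illustrative): a Dropout layer between Dense layers, switched off for evaluation with testmode!:
using Flux

m = Chain(Dense(10, 5, relu), Dropout(0.5), Dense(5, 2))
Flux.testmode!(m)     # dropout becomes a no-op during evaluation
m(rand(Float32, 10))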
Flux.AlphaDropout — Type
AlphaDropout(p)
A dropout layer. Used in Self-Normalizing Neural Networks. The AlphaDropout layer ensures that the mean and variance of activations remain the same as before.
Does nothing to the input once testmode! is true.
Flux.LayerNorm — Type
LayerNorm(h::Integer)
A normalisation layer designed to be used with recurrent hidden states of size h. Normalises the mean and standard deviation of each input before applying a per-neuron gain/bias.
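A minimal sketch for a hidden state of size 5:
using Flux

ln = LayerNorm(5)
h = rand(Float32, 5)
ln(h)  # normalised to mean 0 / std 1, then scaled and shifted per neuron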
Flux.InstanceNorm — Type
InstanceNorm(channels::Integer, σ = identity;
             initβ = zeros, initγ = ones,
             ϵ = 1e-8, momentum = .1)
Instance Normalization layer. channels should be the size of the channel dimension in your data (see below).
Given an array with N dimensions, call the N-1th the channel dimension. (For a batch of feature vectors this is just the data dimension, for WHCN images it's the usual channel dimension.)
InstanceNorm computes the mean and variance for each W×H×1×1 slice and shifts them to have a new mean and variance (corresponding to the learnable, per-channel bias and scale parameters).
Use testmode! during inference.
Examples
m = Chain(
Dense(28^2, 64),
InstanceNorm(64, relu),
Dense(64, 10),
InstanceNorm(10),
softmax)
Flux.GroupNorm — Type
GroupNorm(chs::Integer, G::Integer, λ = identity;
          initβ = (i) -> zeros(Float32, i), initγ = (i) -> ones(Float32, i),
          ϵ = 1f-5, momentum = 0.1f0)
Group Normalization layer. This layer can outperform Batch Normalization and Instance Normalization.
chs is the number of channels, the channel dimension of your input. For an array of N dimensions, the N-1th index is the channel dimension.
G is the number of groups along which the statistics are computed. The number of channels must be an integer multiple of the number of groups.
Use testmode! during inference.
Examples
m = Chain(Conv((3,3), 1=>32, leakyrelu; pad = 1),
GroupNorm(32,16))
# 32 channels, 16 groups (G = 16), thus 2 channels per group used
Testmode
Many normalisation layers behave differently under training and inference (testing). By default, Flux will automatically determine when a layer evaluation is part of training or inference. Still, depending on your use case, it may be helpful to manually specify when these layers should be treated as being trained or not. For this, Flux provides Flux.testmode!. When called on a model (e.g. a layer or chain of layers), this function will place the model into the mode specified.
Flux.testmode! — Function
testmode!(m, mode = true)
Set a layer or model's test mode (see below). Using :auto mode will treat any gradient computation as training.
Note: if you manually set a model into test mode, you need to manually place it back into train mode during the training phase.
Possible values include:
- false for training
- true for testing
- :auto or nothing for Flux to detect the mode automatically
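For example (an illustrative sketch), manually freezing a model for evaluation and then restoring training behaviour:
using Flux

m = Chain(Dense(10, 5), BatchNorm(5))
Flux.testmode!(m)     # e.g. BatchNorm now uses its accumulated statistics
m(rand(Float32, 10))
Flux.trainmode!(m)    # back to training behaviour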
Flux.trainmode! — Function
trainmode!(m, mode = true)
Set a layer or model's train mode (see below). Symmetric to testmode! (i.e. trainmode!(m, mode) == testmode!(m, !mode)).
Note: if you manually set a model into train mode, you need to manually place it into test mode during the testing phase.
Possible values include:
- true for training
- false for testing
- :auto or nothing for Flux to detect the mode automatically