Flux.outputsize — Function (defined in module Flux)
outputsize(m, inputsize::Tuple; padbatch=false)
Calculate the size of the output from model m, given the size of the input. Obeys outputsize(m, size(x)) == size(m(x)) for valid input x.
Keyword padbatch=true is equivalent to using (inputsize..., 1), and returns the final size including this extra batch dimension.
This should be faster than calling size(m(x)). It uses a trivial number type, which should work out of the box for custom layers.
If m is a Tuple or Vector, its elements are applied in sequence, like Chain(m...).
julia> using Flux: outputsize
julia> outputsize(Dense(10 => 4), (10,); padbatch=true)
(4, 1)
julia> m = Chain(Conv((3, 3), 3 => 16), Conv((3, 3), 16 => 32));
julia> m(randn(Float32, 10, 10, 3, 64)) |> size
(6, 6, 32, 64)
julia> outputsize(m, (10, 10, 3); padbatch=true)
(6, 6, 32, 1)
julia> outputsize(m, (10, 10, 3, 64))
(6, 6, 32, 64)
julia> try outputsize(m, (10, 10, 7, 64)) catch e println(e) end
DimensionMismatch("layer Conv((3, 3), 3 => 16) expects size(input, 3) == 3, but got 10×10×7×64 Array{Flux.NilNumber.Nil, 4}")
julia> outputsize([Dense(10 => 4), Dense(4 => 2)], (10, 1)) # Vector of layers becomes a Chain
(2, 1)
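As an illustrative sketch (not from the original docstring), outputsize can be used to work out the input width of a Dense layer that follows a convolutional block, without running any real data:

julia> using Flux

julia> conv = Chain(Conv((3, 3), 3 => 8, relu), Flux.flatten);

julia> h = Flux.outputsize(conv, (28, 28, 3); padbatch=true)[1]  # flattened feature length
5408

julia> model = Chain(conv, Dense(h => 10));

julia> model(randn(Float32, 28, 28, 3, 5)) |> size
(10, 5)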
outputsize(m, x_size, y_size, ...; padbatch=false)
For model or layer m accepting multiple arrays as input, this returns size(m((x, y, ...))) given x_size = size(x), y_size = size(y), etc.
julia> x, y = rand(Float32, 5, 64), rand(Float32, 7, 64);
julia> par = Parallel(vcat, Dense(5 => 9), Dense(7 => 11));
julia> Flux.outputsize(par, (5, 64), (7, 64))
(20, 64)
julia> m = Chain(par, Dense(20 => 13), softmax);
julia> Flux.outputsize(m, (5,), (7,); padbatch=true)
(13, 1)
julia> par(x, y) == par((x, y)) == Chain(par, identity)((x, y))
true
Notice that Chain only accepts multiple arrays as a tuple, while Parallel also accepts them as multiple arguments; outputsize always supplies the tuple.