ConvTranspose
struct defined in module Flux
ConvTranspose(filter, in => out, σ=identity; stride=1, pad=0, dilation=1, [bias, init])
Standard convolutional transpose layer. filter is a tuple of integers specifying the size of the convolutional kernel, while in and out specify the number of input and output channels.

Note that pad=SamePad() here tries to ensure size(output,d) == size(x,d) * stride.
Parameters are controlled by additional keywords, with defaults init=glorot_uniform and bias=true.

See also Conv for a more detailed description of the keywords.
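For example, the bias vector can be omitted and the weight initialiser swapped via these keywords. A quick sketch (the variable name nobias is arbitrary; the printed layer summary is suppressed here since it can vary between Flux versions):

julia> using Flux

julia> nobias = ConvTranspose((5,5), 3 => 7; bias=false, init=Flux.glorot_normal);

julia> sum(length, Flux.params(nobias))  # 5*5*3*7 weights, no bias vector
525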
julia> xs = rand32(100, 100, 3, 50); # a batch of 50 RGB images
julia> layer = ConvTranspose((5,5), 3 => 7, relu)
ConvTranspose((5, 5), 3 => 7, relu) # 532 parameters
julia> layer(xs) |> size
(104, 104, 7, 50)
julia> ConvTranspose((5,5), 3 => 7, stride=2)(xs) |> size
(203, 203, 7, 50)
julia> ConvTranspose((5,5), 3 => 7, stride=3, pad=SamePad())(xs) |> size
(300, 300, 7, 50)
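The spatial sizes above follow the usual transposed-convolution output formula. A small helper (hypothetical, written here only to check the examples above) makes this explicit:

julia> outlen(i, k; stride=1, pad=0, dilation=1) = (i - 1)*stride - 2*pad + dilation*(k - 1) + 1;

julia> outlen(100, 5)             # default stride=1, pad=0: matches (104, 104, 7, 50)
104

julia> outlen(100, 5, stride=2)   # matches (203, 203, 7, 50)
203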
ConvTranspose(weight::AbstractArray, [bias, activation; stride, pad, dilation, groups])
Constructs a ConvTranspose layer with the given weight and bias. Accepts the same keywords and has the same defaults as ConvTranspose(k::NTuple{N,Integer}, ch::Pair{<:Integer,<:Integer}, σ; ...).
julia> weight = rand(3, 4, 5);
julia> bias = zeros(4);
julia> layer = ConvTranspose(weight, bias, sigmoid)
ConvTranspose((3,), 5 => 4, σ) # 64 parameters
julia> layer(randn(100, 5, 64)) |> size # transposed convolution will increase the dimension size (upsampling)
(102, 4, 64)
julia> Flux.params(layer) |> length
2
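The same weight array can also be combined with the convolution keywords, for instance to build a strided, bias-free layer. A sketch (layer2 is an arbitrary name; the printed summary is suppressed):

julia> layer2 = ConvTranspose(weight, false, relu; stride=2);  # same weight, no bias, stride 2

julia> layer2(randn(100, 5, 64)) |> size  # (100-1)*2 + (3-1) + 1 = 201 along the spatial dimension
(201, 4, 64)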