Dense

struct defined in module Flux

Dense(in => out, σ=identity; bias=true, init=glorot_uniform)
Dense(W::AbstractMatrix, [bias, σ])
Create a traditional fully connected layer, whose forward pass is given by:
			y = σ.(W * x .+ bias)
The input x should be a vector of length in, or a batch of vectors represented as an in × N matrix, or any array with size(x,1) == in. The output y will be a vector of length out, or a batch with size(y) == (out, size(x)[2:end]...).
The keyword bias=false switches off the trainable bias for the layer. The weight matrix is initialised as W = init(out, in), calling the function given to the keyword init, which defaults to Flux.glorot_uniform. The weight matrix and/or the bias vector (of length out) may also be provided explicitly.
julia> d = Dense(5 => 2)
Dense(5 => 2)       # 12 parameters
julia> d(rand(Float32, 5, 64)) |> size
(2, 64)
julia> d(rand(Float32, 5, 1, 1, 64)) |> size  # treated as three batch dimensions
(2, 1, 1, 64)
julia> d1 = Dense(ones(2, 5), false, tanh)  # using provided weight matrix
Dense(5 => 2, tanh; bias=false)  # 10 parameters
julia> d1(ones(5))
2-element Vector{Float64}:
 0.9999092042625951
 0.9999092042625951
julia> Flux.params(d1)  # no trainable bias
Params([[1.0 1.0 … 1.0 1.0; 1.0 1.0 … 1.0 1.0]])
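A further sketch (not part of the original docstring) illustrating the init keyword and an explicitly supplied weight matrix and bias vector; the anonymous initialiser and the numeric values below are purely illustrative:

julia> using Flux

julia> d2 = Dense(5 => 2; init = (dims...) -> randn(Float32, dims...) ./ 100);  # hypothetical custom initialiser

julia> size(d2.weight)   # weight is init(out, in), so out × in
(2, 5)

julia> W = Float32[1 2 3 4 5; 6 7 8 9 10] ./ 10; b = Float32[0.5, -0.5];

julia> d3 = Dense(W, b, relu);   # weight matrix and bias vector given explicitly

julia> d3(ones(Float32, 5)) ≈ relu.(W * ones(Float32, 5) .+ b)   # matches the forward pass above
true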