Convolution Layers
Graph Convolutional Layer
Following Kipf & Welling (the reference below), the layer computes

$X' = \sigma(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} X \Theta)$

where $\hat{A} = A + I$, $A$ denotes the adjacency matrix, $I$ the identity matrix, and $\hat{D} = [\hat{d}_{ii}]$ is the diagonal degree matrix with $\hat{d}_{ii} = \sum_{j} \hat{a}_{ij}$.
GeometricFlux.GCNConv — Type

GCNConv([fg,] in => out, σ=identity; bias=true, init=glorot_uniform)
Graph convolutional layer.
Arguments

- fg: Optionally pass a FeaturedGraph.
- in: The dimension of input features.
- out: The dimension of output features.
- σ: Activation function.
- bias: Add learnable bias.
- init: Weights' initializer.
The input to the layer is a node feature array X of size (num_features, num_nodes).
Reference: Semi-supervised Classification with Graph Convolutional Networks
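A minimal usage sketch. The adjacency-matrix FeaturedGraph constructor and the nf keyword for attaching node features are assumptions based on GraphSignals.jl and may differ across versions:

```julia
using GeometricFlux, Flux

# A small undirected triangle graph, given by its adjacency matrix.
adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)       # assumed: FeaturedGraph accepts an adjacency matrix

X = rand(Float32, 4, 3)       # node features, size (num_features, num_nodes)

# Static-graph mode: the graph is stored in the layer, which then maps X -> X'.
layer = GCNConv(fg, 4 => 8, relu)
Y = layer(X)                  # size (8, 3)

# Variable-graph mode: the layer takes a FeaturedGraph carrying node features.
l = GCNConv(4 => 8, relu)
fg_out = l(FeaturedGraph(adj, nf=X))   # assumed keyword `nf` for node features
```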
Chebyshev Spectral Graph Convolutional Layer
The Chebyshev spectral graph convolutional layer computes

$X' = \sum_{k=0}^{K-1} Z^{(k)} \Theta^{(k)}$

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

$Z^{(0)} = X$, $Z^{(1)} = \hat{L} X$, $Z^{(k)} = 2 \hat{L} Z^{(k-1)} - Z^{(k-2)}$

and $\hat{L} = \frac{2}{\lambda_{max}} L - I$ is the scaled graph Laplacian.
GeometricFlux.ChebConv — Type

ChebConv([fg,] in=>out, k; bias=true, init=glorot_uniform)
Chebyshev spectral graph convolutional layer.
Arguments

- fg: Optionally pass a FeaturedGraph.
- in: The dimension of input features.
- out: The dimension of output features.
- k: The order of Chebyshev polynomial.
- bias: Add learnable bias.
- init: Weights' initializer.
Reference: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
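A brief sketch of constructing the layer with polynomial order k, reusing the assumed FeaturedGraph setup from the GCNConv example:

```julia
using GeometricFlux, Flux

# 4-node cycle graph.
adj = [0 1 0 1;
       1 0 1 0;
       0 1 0 1;
       1 0 1 0]
fg = FeaturedGraph(adj)

X = rand(Float32, 4, 4)          # (num_features, num_nodes)
layer = ChebConv(fg, 4=>16, 3)   # k = 3: uses Chebyshev terms Z⁰, Z¹, Z²
Y = layer(X)                     # (16, 4)
```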
Graph Neural Network Layer
GeometricFlux.GraphConv — Type

GraphConv([fg,] in => out, σ=identity, aggr=+; bias=true, init=glorot_uniform)
Graph neural network layer.
Arguments

- fg: Optionally pass a FeaturedGraph.
- in: The dimension of input features.
- out: The dimension of output features.
- σ: Activation function.
- aggr: An aggregate function applied to the result of the message function. +, -, *, /, max, min and mean are available.
- bias: Add learnable bias.
- init: Weights' initializer.
Reference: Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks
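A hedged sketch of picking the aggregation function; the FeaturedGraph construction is the same assumption as above:

```julia
using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)

X = rand(Float32, 4, 3)

# Sum neighbor messages (aggr = +); any of +, -, *, /, max, min, mean may be passed.
layer = GraphConv(fg, 4 => 8, relu, +)
Y = layer(X)   # (8, 3)
```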
Graph Attentional Layer
Following the reference below, the layer computes

$\textbf{x}'_i = \alpha_{i,i} \Theta \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j} \Theta \textbf{x}_j$

where the attention coefficient $\alpha_{i,j}$ can be calculated from

$\alpha_{i,j} = \frac{\exp(\mathrm{LeakyReLU}(\textbf{a}^T [\Theta \textbf{x}_i \| \Theta \textbf{x}_j]))}{\sum_{k \in \mathcal{N}(i) \cup \{i\}} \exp(\mathrm{LeakyReLU}(\textbf{a}^T [\Theta \textbf{x}_i \| \Theta \textbf{x}_k]))}$
GeometricFlux.GATConv — Type

GATConv([fg,] in => out;
        heads=1,
        concat=true,
        init=glorot_uniform,
        bias=true,
        negative_slope=0.2)
Graph attentional layer.
Arguments

- fg: Optionally pass a FeaturedGraph.
- in: The dimension of input features.
- out: The dimension of output features.
- bias::Bool: Keyword argument, whether to learn the additive bias.
- heads: Number of attention heads.
- concat: Concatenate layer output or not. If not, layer output is averaged.
- negative_slope::Real: Keyword argument, the parameter of LeakyReLU.
Reference: Graph Attention Networks
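A sketch of the multi-head setup. The output dimension under concat=true (heads * out) follows the standard GAT behavior implied by the concat argument, not a verified detail of this implementation:

```julia
using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)

X = rand(Float32, 4, 3)

gat = GATConv(fg, 4 => 8; heads=4, concat=true, negative_slope=0.2)
Y = gat(X)   # with concat=true the head outputs are stacked: assumed size (8 * 4, 3)
```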
Gated Graph Convolution Layer
Following the reference below, the layer initializes the hidden state as $\textbf{h}^{(0)}_i = [\textbf{x}_i; \textbf{0}]$ and iterates

$\textbf{h}^{(l)}_i = \mathrm{GRU}\left(\textbf{h}^{(l-1)}_i, \sum_{j \in \mathcal{N}(i)} W \textbf{h}^{(l-1)}_j\right)$

where $\textbf{h}^{(l)}_i$ denotes the hidden state of node $i$ after the $l$-th pass through the GRU. The dimension of the input $\textbf{x}_i$ needs to be less than or equal to out.
GeometricFlux.GatedGraphConv — Type

GatedGraphConv([fg,] out, num_layers; aggr=+, init=glorot_uniform)
Gated graph convolution layer.
Arguments

- fg: Optionally pass a FeaturedGraph.
- out: The dimension of output features.
- num_layers: The number of gated recurrent unit steps.
- aggr: An aggregate function applied to the result of the message function. +, -, *, /, max, min and mean are available.
Reference: Gated Graph Sequence Neural Networks
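A sketch illustrating the constraint that the input dimension must not exceed out; the zero-padding of the initial hidden state is an assumption consistent with the gated graph network formulation, not a documented detail:

```julia
using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)

# Input feature dimension (4) must be ≤ out (8); the hidden state is assumed
# to be zero-padded up to `out` before the first GRU step.
X = rand(Float32, 4, 3)
ggc = GatedGraphConv(fg, 8, 3; aggr=+)   # out = 8, num_layers = 3 GRU steps
H = ggc(X)                               # (8, 3)
```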
Edge Convolutional Layer
Following the reference below, the layer computes

$\textbf{x}'_i = \mathrm{aggr}_{j \in \mathcal{N}(i)} \, f_{\Theta}(\textbf{x}_i \,\|\, \textbf{x}_j - \textbf{x}_i)$

where $f_{\Theta}$ denotes a neural network parametrized by $\Theta$, i.e., an MLP, and $\|$ denotes concatenation.
GeometricFlux.EdgeConv — Type

EdgeConv([fg,] nn; aggr=max)
Edge convolutional layer.
Arguments

- fg: Optionally pass a FeaturedGraph.
- nn: A neural network (e.g. a Dense layer or an MLP).
- aggr: An aggregate function applied to the result of the message function. +, max and mean are available.
Reference: Dynamic Graph CNN for Learning on Point Clouds
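A sketch wiring up the inner network nn. Its input width of 2 * num_features is inferred from the formula above (the concatenation of $\textbf{x}_i$ and $\textbf{x}_j - \textbf{x}_i$), not from a documented requirement:

```julia
using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)

X = rand(Float32, 4, 3)

# The inner network sees the concatenation [xᵢ ‖ xⱼ - xᵢ], so its input
# dimension is assumed to be 2 × num_features.
nn = Dense(2 * 4, 8, relu)
ec = EdgeConv(fg, nn; aggr=max)
Y = ec(X)   # (8, 3)
```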
Graph Isomorphism Network
Following the reference below, the layer computes

$\textbf{x}'_i = f_{\Theta}\left((1 + \epsilon) \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \textbf{x}_j\right)$

where $f_{\Theta}$ denotes a neural network parametrized by $\Theta$, i.e., an MLP.
GeometricFlux.GINConv — Type

GINConv([fg,] nn, [eps=0])
Graph Isomorphism Network.
Arguments

- fg: Optionally pass in a FeaturedGraph as input.
- nn: A neural network/layer.
- eps: Weighting factor.
The layer follows the definition in the original paper, Xu et al. (2018), https://arxiv.org/abs/1810.00826.
Reference: How Powerful are Graph Neural Networks?
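A sketch with a small MLP as nn and a fixed eps; naming and graph setup follow the assumptions of the earlier examples:

```julia
using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)

X = rand(Float32, 4, 3)

mlp = Chain(Dense(4, 8, relu), Dense(8, 8))
gin = GINConv(fg, mlp, 0.1)   # eps = 0.1 weights the central node's own features
Y = gin(X)                    # (8, 3)
```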
Crystal Graph Convolutional Network
Following the reference below, the layer computes

$\textbf{x}'_i = \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \sigma(\textbf{z}_{i,j} W_f + \textbf{b}_f) \odot g(\textbf{z}_{i,j} W_s + \textbf{b}_s)$

where $\textbf{z}_{i,j} = [\textbf{x}_i, \textbf{x}_j, \textbf{e}_{i,j}]$ denotes the concatenation of node features, neighboring node features, and edge features. The operation $\odot$ represents elementwise multiplication, $\sigma$ denotes the sigmoid function, and $g$ is a softplus nonlinearity.
GeometricFlux.CGConv — Type

CGConv([fg,] (node_dim, edge_dim), out, init, bias=true, as_edge=false)
Crystal Graph Convolutional network. Uses both node and edge features.
Arguments

- fg: Optional FeaturedGraph argument.
- node_dim: Dimensionality of the input node features. This is also necessarily the output dimensionality.
- edge_dim: Dimensionality of the input edge features.
- out: Dimensionality of the output features.
- init: Initialization algorithm for each of the weight matrices.
- bias: Whether or not to learn an additive bias parameter.
- as_edge: When the layer is called as CGConv(M), whether to treat the input features M as node features or as edge features.
Usage

You can call CGConv in several different ways, as sketched below:

- Pass a FeaturedGraph: CGConv(fg) returns a FeaturedGraph.
- Pass both node and edge features: CGConv(X, E).
- Pass one matrix, which is treated as node features or edge features according to the as_edge keyword argument.
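A sketch of the node-and-edge-features call. The positional init argument, and the (edge_dim, num_edges) layout of E with one column per directed edge, are assumptions, not verified API details:

```julia
using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
fg = FeaturedGraph(adj)

node_dim, edge_dim = 4, 2
X = rand(Float32, node_dim, 3)   # node features
E = rand(Float32, edge_dim, 6)   # edge features; assumes one column per directed
                                 # edge (3 undirected edges → 6 columns)

cg = CGConv(fg, (node_dim, edge_dim), node_dim, glorot_uniform)
Y = cg(X, E)   # output node features of dimension node_dim, per the docstring
```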