Convolution Layers

Graph Convolutional Layer

\[X' = \sigma(\hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} X \Theta)\]

where $\hat{A} = A + I$, $A$ denotes the adjacency matrix, and $\hat{D}$ is the diagonal degree matrix of $\hat{A}$ with entries $\hat{d}_{ii} = \sum_j \hat{a}_{ij}$.
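
As a plain-matrix sketch of this rule (illustrative only; the 4-node cycle graph, the dimensions, and the use of tanh are arbitrary choices, not part of the layer API):

using LinearAlgebra

A = [0 1 0 1;
     1 0 1 0;
     0 1 0 1;
     1 0 1 0]                         # adjacency matrix of a 4-node cycle
Â = A + I                             # Â = A + I: add self-loops
D̂ = Diagonal(vec(sum(Â, dims=2)))    # diagonal degree matrix of Â
S = inv(sqrt(D̂)) * Â * inv(sqrt(D̂))  # D̂^(-1/2) Â D̂^(-1/2)
X = rand(3, 4)                        # node features, (num_features, num_nodes)
Θ = rand(5, 3)                        # weights mapping 3 features to 5
X′ = tanh.(Θ * X * S)                 # features are stored column-wise, so Θ and S act on opposite sides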

GeometricFlux.GCNConv — Type
GCNConv([fg,] in => out, σ=identity; bias=true, init=glorot_uniform)

Graph convolutional layer.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

The input to the layer is a node feature array X of size (num_features, num_nodes).

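A minimal usage sketch (assuming a FeaturedGraph built from an adjacency matrix; the graph and dimensions are illustrative):

using GeometricFlux, Flux

adj = [0 1 0 1;
       1 0 1 0;
       0 1 0 1;
       1 0 1 0]
fg = FeaturedGraph(adj)            # graph structure only
layer = GCNConv(fg, 3=>5, relu)    # 3 input features per node, 5 output features
X = rand(Float32, 3, 4)            # (num_features, num_nodes)
Y = layer(X)                       # 5×4 output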

Reference: Semi-supervised Classification with Graph Convolutional Networks


Chebyshev Spectral Graph Convolutional Layer

\[X' = \sum^{K-1}_{k=0} Z^{(k)} \Theta^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[Z^{(0)} = X \\ Z^{(1)} = \hat{L} X \\ Z^{(k)} = 2 \hat{L} Z^{(k-1)} - Z^{(k-2)}\]

and $\hat{L} = \frac{2}{\lambda_{max}} L - I$.
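
The recursion is cheap to write out directly. A sketch (assuming the scaled Laplacian L̂ has already been built from the graph Laplacian and an estimate of λ_max, with features stored column-wise):

# returns [Z⁽⁰⁾, …, Z⁽ᴷ⁻¹⁾] given the scaled Laplacian L̂ and features X (num_features × num_nodes)
function cheb_terms(L̂, X, K)
    Z = [X]                                       # Z⁽⁰⁾ = X
    K ≥ 2 && push!(Z, X * L̂)                      # Z⁽¹⁾ = L̂X, applied on the node dimension (L̂ is symmetric)
    for k in 3:K
        push!(Z, 2 .* (Z[end] * L̂) .- Z[end-1])   # Z⁽ᵏ⁾ = 2L̂Z⁽ᵏ⁻¹⁾ - Z⁽ᵏ⁻²⁾
    end
    return Z
end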

GeometricFlux.ChebConv — Type
ChebConv([fg,] in=>out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • in: The dimension of input features.
  • out: The dimension of output features.
  • k: The order of the Chebyshev polynomial.
  • bias: Add learnable bias.
  • init: Weights' initializer.
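
Usage is analogous to GCNConv, with the polynomial order k as an extra argument (graph and dimensions are illustrative):

using GeometricFlux

adj = [0 1 0 1;
       1 0 1 0;
       0 1 0 1;
       1 0 1 0]
fg = FeaturedGraph(adj)
layer = ChebConv(fg, 3=>5, 4)   # order-4 Chebyshev filter
X = rand(Float32, 3, 4)         # (num_features, num_nodes)
Y = layer(X)                    # 5×4 output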

Reference: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering


Graph Neural Network Layer

\[\textbf{x}_i' = \sigma (\Theta_1 \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \Theta_2 \textbf{x}_j)\]
GeometricFlux.GraphConv — Type
GraphConv([fg,] in => out, σ=identity, aggr=+; bias=true, init=glorot_uniform)

Graph neural network layer.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: An aggregate function applied to the result of the message function. +, -, *, /, max, min and mean are available.

  • bias: Add learnable bias.
  • init: Weights' initializer.
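
A minimal usage sketch (the graph, activation, and dimensions are illustrative):

using GeometricFlux, Flux

fg = FeaturedGraph([0 1 1; 1 0 1; 1 1 0])   # a 3-node complete graph
layer = GraphConv(fg, 4=>8, relu, +)        # sum aggregation over neighbors
X = rand(Float32, 4, 3)                     # (num_features, num_nodes)
Y = layer(X)                                # 8×3 output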

Reference: Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks


Graph Attentional Layer

\[\textbf{x}_i' = \alpha_{i,i} \Theta \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j} \Theta \textbf{x}_j\]

where the attention coefficient $\alpha_{i,j}$ can be calculated from

\[\alpha_{i,j} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\textbf{a}^T [\Theta \textbf{x}_i || \Theta \textbf{x}_j]\right)\right)}{\sum_{k \in \mathcal{N}(i) \cup \{i\}} \exp\left(\mathrm{LeakyReLU}\left(\textbf{a}^T [\Theta \textbf{x}_i || \Theta \textbf{x}_k]\right)\right)}\]
GeometricFlux.GATConv — Type
GATConv([fg,] in => out;
        heads=1,
        concat=true,
        init=glorot_uniform,
        bias=true,
        negative_slope=0.2)

Graph attentional layer.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • in: The dimension of input features.
  • out: The dimension of output features.
  • bias::Bool: Keyword argument, whether to learn the additive bias.
  • heads: Number of attention heads.
  • concat: Whether to concatenate the per-head outputs; if false, they are averaged instead.
  • negative_slope::Real: Keyword argument, the parameter of LeakyReLU.
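
A minimal usage sketch; with concat=true the per-head outputs are concatenated, so the output dimension is heads * out (graph and dimensions are illustrative):

using GeometricFlux

fg = FeaturedGraph([0 1 1; 1 0 1; 1 1 0])
layer = GATConv(fg, 4=>8; heads=2, concat=true)
X = rand(Float32, 4, 3)          # (num_features, num_nodes)
Y = layer(X)                     # (2 * 8) × 3 = 16×3 output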

Reference: Graph Attention Networks


Gated Graph Convolution Layer

\[\textbf{h}^{(0)}_i = \textbf{x}_i || \textbf{0} \\ \textbf{h}^{(l)}_i = GRU(\textbf{h}^{(l-1)}_i, \sum_{j \in \mathcal{N}(i)} \Theta \textbf{h}^{(l-1)}_j)\]

where $\textbf{h}^{(l)}_i$ denotes the hidden state of node $i$ after the $l$-th GRU step. The dimension of the input $\textbf{x}_i$ must be less than or equal to out.

GeometricFlux.GatedGraphConv — Type
GatedGraphConv([fg,] out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • out: The dimension of output features.
  • num_layers: The number of gated recurrent unit layers.
  • aggr: An aggregate function applied to the result of the message function. +, -, *, /, max, min and mean are available.

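A minimal usage sketch; the input feature dimension (3 here) must not exceed out, and is zero-padded up to it per the definition above (graph and dimensions are illustrative):

using GeometricFlux

fg = FeaturedGraph([0 1 1; 1 0 1; 1 1 0])
layer = GatedGraphConv(fg, 8, 2)   # out = 8, num_layers = 2
X = rand(Float32, 3, 3)            # 3 ≤ out; padded with zeros to length 8
Y = layer(X)                       # 8×3 output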

Reference: Gated Graph Sequence Neural Networks


Edge Convolutional Layer

\[\textbf{x}_i' = \sum_{j \in \mathcal{N}(i)} f_{\Theta}(\textbf{x}_i || \textbf{x}_j - \textbf{x}_i)\]

where $f_{\Theta}$ denotes a neural network parametrized by $\Theta$, e.g., an MLP.

GeometricFlux.EdgeConv — Type
EdgeConv([fg,] nn; aggr=max)

Edge convolutional layer.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • nn: A neural network (e.g. a Dense layer or an MLP).
  • aggr: An aggregate function applied to the result of message function. +, max and mean are available.
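
A minimal usage sketch; nn must accept the concatenation $\textbf{x}_i || \textbf{x}_j - \textbf{x}_i$, i.e. twice the input feature dimension (graph and dimensions are illustrative):

using GeometricFlux, Flux

fg = FeaturedGraph([0 1 1; 1 0 1; 1 1 0])
nn = Dense(2 * 4, 8, relu)           # input is x_i || (x_j - x_i), hence 2 × 4 features
layer = EdgeConv(fg, nn; aggr=max)
X = rand(Float32, 4, 3)              # (num_features, num_nodes)
Y = layer(X)                         # 8×3 output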

Reference: Dynamic Graph CNN for Learning on Point Clouds


Graph Isomorphism Network

\[\textbf{x}_i' = f_{\Theta}\left((1 + \varepsilon) \cdot \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \textbf{x}_j \right)\]

where $f_{\Theta}$ denotes a neural network parametrized by $\Theta$, e.g., an MLP.

GeometricFlux.GINConv — Type
GINConv([fg,] nn, [eps=0])

Graph Isomorphism Network.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • nn: A neural network/layer.
  • eps: Weighting factor.

This layer follows the definition in the original paper, Xu et al. (2018), https://arxiv.org/abs/1810.00826.

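A minimal usage sketch (the MLP and the ε value are illustrative):

using GeometricFlux, Flux

fg = FeaturedGraph([0 1 1; 1 0 1; 1 1 0])
nn = Chain(Dense(4, 8, relu), Dense(8, 8))   # the f_Θ MLP
layer = GINConv(fg, nn, 0.1)                 # eps = 0.1
X = rand(Float32, 4, 3)                      # (num_features, num_nodes)
Y = layer(X)                                 # 8×3 output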

Reference: How Powerful are Graph Neural Networks?

Crystal Graph Convolutional Network

\[\textbf{x}_i' = \textbf{x}_i + \sum_{j \in \mathcal{N}(i)} \sigma\left( \textbf{z}_{i,j} \textbf{W}_f + \textbf{b}_f \right) \odot \text{softplus}\left(\textbf{z}_{i,j} \textbf{W}_s + \textbf{b}_s \right)\]

where $\textbf{z}_{i,j} = [\textbf{x}_i, \textbf{x}_j, \textbf{e}_{i,j}]$ denotes the concatenation of node features, neighboring node features, and edge features. The operation $\odot$ represents elementwise multiplication, and $\sigma$ denotes the sigmoid function.

GeometricFlux.CGConv — Type
CGConv([fg,] (node_dim, edge_dim), out, init, bias=true, as_edge=false)

Crystal Graph Convolutional network. Uses both node and edge features.

Arguments

  • fg: Optionally pass a FeaturedGraph.
  • node_dim: The dimension of input node features. Because of the residual connection, this is necessarily also the output dimension.
  • edge_dim: The dimension of input edge features.
  • out: The dimension of output features.
  • init: Initializer for each of the weight matrices.
  • bias: Whether or not to learn an additive bias parameter.
  • as_edge: When the layer is called with a single matrix, CGConv(M), whether to treat M as edge features (true) or as node features (false).

Usage

You can call CGConv in several different ways:

  • Pass a FeaturedGraph: CGConv(fg) returns a new FeaturedGraph.
  • Pass both node and edge features: CGConv(X, E).
  • Pass a single matrix M: CGConv(M), where M is treated as node or edge features according to the as_edge keyword argument.
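
A minimal sketch of the FeaturedGraph call style. This assumes GraphSignals' keyword constructor FeaturedGraph(adj; nf=..., ef=...) and follows the constructor signature documented above; the edge-feature column count must match the number of edges in fg (assumed 6 here for the directed representation), and all dimensions are illustrative:

using GeometricFlux, Flux

adj = [0 1 1;
       1 0 1;
       1 1 0]
X = rand(Float32, 4, 3)              # node features, node_dim = 4
E = rand(Float32, 2, 6)              # edge features, edge_dim = 2; one column per (directed) edge
fg = FeaturedGraph(adj, nf=X, ef=E)
layer = CGConv(fg, (4, 2), 4, glorot_uniform)   # per the signature above
fg′ = layer(fg)                      # returns a FeaturedGraph with updated node features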

Reference: Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties