Graph neural networks (GNNs) provide state-of-the-art results in a wide variety of tasks that typically involve predicting features at the vertices of a graph. They are built from layers of graph convolutions, which serve as a powerful inductive bias for describing the flow of information among the vertices. Often, more than one data modality is available. This work considers a setting in which several graphs share the same vertex set and a common vertex-level learning task. This generalizes standard GNN models to GNNs with several graph operators that do not commute; we call this model a graph-tuple neural network (GtNN). In this work, we develop the mathematical theory needed to address the stability and transferability of GtNNs, using properties of non-commuting, non-expansive operators. We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem guaranteeing that all graph-tuple neural networks are transferable on convergent graph-tuple sequences. In particular, there is no non-transferable energy under the notion of convergence considered here. Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs (GtNNs) and provide a strict improvement over what is currently known even in the GNN case. We illustrate our theoretical results with simple experiments on synthetic and real-world data. To this end, we derive a training procedure that provably enforces the stability of the resulting model.
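For intuition only, here is a minimal sketch of what one GtNN layer could look like, assuming first-order filters over a tuple of graph operators on a shared vertex set. The function name gtnn_layer and its signature are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gtnn_layer(X, shifts, weights, sigma=np.tanh):
    """One first-order graph-tuple convolution layer (illustrative sketch).

    X       : (n, d_in) array of features on a vertex set shared by all graphs
    shifts  : list of m (n, n) graph operators S_1, ..., S_m (need not commute)
    weights : list of m + 1 (d_in, d_out) filter taps W_0, ..., W_m
    """
    Y = X @ weights[0]                    # order-zero (identity) term
    for S, W in zip(shifts, weights[1:]):
        Y = Y + S @ (X @ W)               # one first-order term per operator
    return sigma(Y)
```

With m = 1 this reduces to a standard one-hop GNN layer; with m > 1 the operators mix across stacked layers, which is where their non-commutativity enters.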
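The abstract does not spell out the stability-enforcing training procedure; one hypothetical way to keep a layer of this form non-expansive is to rescale the filter taps after each optimizer step, as in the sketch below. The triangle-inequality bound it uses is an assumption made here for illustration, not the paper's stated method.

```python
import numpy as np

def project_nonexpansive(weights, shifts):
    """Rescale the filter taps so the linear part of the layer is non-expansive.

    Uses the conservative bound ||W_0||_2 + sum_k ||S_k||_2 ||W_k||_2 <= 1,
    where ||.||_2 is the spectral norm; with a 1-Lipschitz nonlinearity such
    as tanh, this makes the whole layer non-expansive.
    """
    bound = np.linalg.norm(weights[0], 2)
    bound += sum(np.linalg.norm(S, 2) * np.linalg.norm(W, 2)
                 for S, W in zip(shifts, weights[1:]))
    if bound > 1.0:
        weights = [W / bound for W in weights]   # norms scale linearly in W
    return weights
```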