Graph neural networks (GNNs), and especially message-passing neural networks, excel in various domains such as physics, drug discovery, and molecular modeling. The expressivity of GNNs, i.e., their ability to discriminate non-isomorphic graphs, critically depends on the functions employed for message aggregation and graph-level readout. By applying signal propagation theory, we propose a variance-preserving aggregation function (VPA) that maintains expressivity while yielding improved forward and backward dynamics. Experiments demonstrate that VPA leads to increased predictive performance for popular GNN architectures as well as improved learning dynamics. Our results could pave the way towards normalizer-free or self-normalizing GNNs.
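The contrast between common aggregators can be made concrete with a small simulation. A minimal sketch, assuming VPA scales the summed messages by 1/√n (where n is the number of neighbors): for i.i.d. unit-variance messages, sum aggregation inflates the output variance by a factor of n, mean aggregation shrinks it by 1/n, and the 1/√n scaling preserves it.

```python
import numpy as np

def aggregate(messages: np.ndarray, mode: str) -> np.ndarray:
    """Aggregate an (n_neighbors, d) array of incoming messages."""
    n = messages.shape[0]
    if mode == "sum":   # output variance grows with n
        return messages.sum(axis=0)
    if mode == "mean":  # output variance shrinks as 1/n
        return messages.mean(axis=0)
    if mode == "vpa":   # sum scaled by 1/sqrt(n): variance preserved
        return messages.sum(axis=0) / np.sqrt(n)
    raise ValueError(f"unknown mode: {mode}")

# Monte Carlo check: i.i.d. unit-variance messages from n = 16 neighbors.
rng = np.random.default_rng(0)
n, d, trials = 16, 8, 20_000
msgs = rng.standard_normal((trials, n, d))

variances = {
    mode: np.stack([aggregate(m, mode) for m in msgs]).var()
    for mode in ("sum", "mean", "vpa")
}
print(variances)  # sum ~ n, mean ~ 1/n, vpa ~ 1
```

Only the VPA output keeps unit variance regardless of neighborhood size, which is what yields the stable forward and backward dynamics referred to above.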