Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains. A desirable property of GNNs is transferability, where a trained network can be applied to data from a different graph without retraining and retain its accuracy. A recent method of capturing the transferability of GNNs is through the use of graphons, which are symmetric, measurable functions representing the limit of a sequence of large dense graphs. In this work, we contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture. We prove its ability to approximate bandlimited graphon signals within a specified error tolerance using a minimal number of network weights. We then leverage this result to establish the transferability of an explicit two-layer GNN over all sufficiently large graphs in a convergent sequence. Our work addresses transferability between both deterministic weighted graphs and simple random graphs, and overcomes issues related to the curse of dimensionality that arise in other GNN transferability results. The proposed WNN and GNN architectures offer practical solutions for handling graph data of varying sizes while maintaining performance guarantees without extensive retraining.
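For context, the standard graphon-filter formulation from the graphon signal processing literature makes the setting concrete; the paper's explicit two-layer construction may differ from this generic form. A graphon $W\colon[0,1]^2\to[0,1]$ induces the integral (shift) operator

$$(T_W f)(u) = \int_0^1 W(u,v)\,f(v)\,dv,$$

and a two-layer WNN with pointwise nonlinearity $\sigma$ and filter taps $h_k^{(1)}, h_k^{(2)}$ takes the form

$$\Phi_W(f) = \sum_{k=0}^{K-1} h_k^{(2)}\, T_W^k\, \sigma\!\left(\sum_{k=0}^{K-1} h_k^{(1)}\, T_W^k f\right).$$

A graphon signal $f$ is bandlimited when it lies in the span of the eigenfunctions of $T_W$ whose eigenvalues exceed a fixed threshold in magnitude.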
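The transferability claim can be illustrated with a minimal, hypothetical sketch (Python/NumPy; not the paper's construction): the filter taps of a polynomial-filter GNN do not depend on the graph size, so the same weights can be reused on graphs of different sizes sampled from one graphon. The graphon W, the taps taps1/taps2, and the normalization S = A/n below are all illustrative assumptions.

```python
import numpy as np

def graph_filter(S, x, taps):
    """Polynomial graph filter: sum_k taps[k] * S^k x."""
    out = np.zeros_like(x, dtype=float)
    z = x.astype(float)
    for h in taps:
        out = out + h * z
        z = S @ z
    return out

def two_layer_gnn(S, x, taps1, taps2, sigma=np.tanh):
    """Two-layer GNN: filter, pointwise nonlinearity, filter.
    The taps are independent of the graph size n."""
    return graph_filter(S, sigma(graph_filter(S, x, taps1)), taps2)

rng = np.random.default_rng(0)
W = lambda u, v: np.exp(-np.abs(u - v))   # hypothetical graphon

def sample_graph(n):
    """Simple random graph on n nodes sampled from the graphon W."""
    u = np.sort(rng.uniform(size=n))
    A = (rng.uniform(size=(n, n)) < W(u[:, None], u[None, :])).astype(float)
    A = np.triu(A, 1); A = A + A.T        # symmetric, no self-loops
    return u, A / n                       # graphon-consistent normalization

taps1, taps2 = [0.5, 0.3, 0.1], [1.0, -0.2]   # hypothetical trained weights
for n in (100, 1000, 10000):
    u, S = sample_graph(n)
    x = np.cos(2 * np.pi * u)             # smooth input signal
    y = two_layer_gnn(S, x, taps1, taps2)
    print(f"n={n}: output RMS = {np.sqrt(np.mean(y**2)):.4f}")
```

As n grows, the output summary stabilizes, reflecting the kind of size-independence that the transferability bounds formalize; the paper's results quantify this convergence for all sufficiently large graphs in the sequence.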