Recent advances in integrating positional and structural encodings (PSEs) into graph neural networks (GNNs) have significantly enhanced performance across a range of graph learning tasks. However, the general applicability of these encodings, and their potential to serve as foundational representations for graphs, remains uncertain. This paper investigates the fine-tuning efficiency, scalability with sample size, and generalization capability of learnable PSEs across diverse graph datasets. Specifically, we evaluate their potential as universal pre-trained models that can be adapted to new tasks with minimal fine-tuning and limited data. Furthermore, we assess the expressivity of the learned representations, particularly when used to augment downstream GNNs. Through extensive benchmarking and empirical analysis, we demonstrate that PSEs generally improve downstream models, although some datasets require task-specific PSE augmentations to achieve optimal performance. Nevertheless, our findings highlight their significant potential to become integral components of future graph foundation models. We provide new insights into the strengths and limitations of PSEs, contributing to the broader discourse on foundation models in graph learning.