Recent advances in integrating positional and structural encodings (PSEs) into graph neural networks (GNNs) have significantly improved performance across a range of graph learning tasks. However, the general applicability of these encodings and their potential to serve as foundational representations for graphs remain uncertain. This paper investigates the fine-tuning efficiency, scalability with sample size, and generalization capability of learnable PSEs across diverse graph datasets. Specifically, we evaluate their potential as universal pre-trained models that can be adapted to new tasks with minimal fine-tuning and limited data. Furthermore, we assess the expressivity of the learned representations, particularly when used to augment downstream GNNs. Through extensive benchmarking and empirical analysis, we demonstrate that PSEs generally enhance downstream models, although some datasets require specific PSE augmentations to achieve optimal performance. Nevertheless, our findings highlight their significant potential to become integral components of future graph foundation models. We provide new insights into the strengths and limitations of PSEs, contributing to the broader discourse on foundation models in graph learning.