Graph neural networks stand as the predominant technique for graph representation learning owing to their strong expressive power, yet their end-to-end performance depends heavily on the availability of high-quality labels. The pretraining-and-fine-tuning paradigm has therefore been proposed to mitigate the labeling cost. Subsequently, the gap between pretext tasks and downstream tasks has spurred the development of graph prompt learning, which inserts a small set of learnable graph prompts into the original graph data, achieving competitive performance with minimal additional parameters. However, existing exploratory works remain limited: they all concentrate on learning fixed, task-specific prompts, which may not generalize well across the diverse instances a task comprises. To tackle this challenge, we introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper, which generates distinct prompts tailored to different input instances. The process involves generating an intermediate prompt for each instance with a lightweight architecture, quantizing these prompts through trainable codebook vectors, and employing the exponential moving average technique to ensure stable training. Extensive experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
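The quantization-with-EMA step described above can be sketched in isolation. The snippet below is a minimal NumPy illustration, not the paper's implementation: the dimensions, decay rate, and the use of random vectors as stand-ins for the lightweight prompt generator's output are all assumptions. Each per-instance intermediate prompt is snapped to its nearest codebook vector, and the codebook is then refreshed with exponential moving averages of the assigned prompts rather than by gradient descent, which is what lends the training its stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper): 16-dim prompts, 8 codebook entries.
dim, num_codes, decay = 16, 8, 0.99

codebook = rng.normal(size=(num_codes, dim))   # trainable codebook vectors
ema_counts = np.ones(num_codes)                # EMA of per-code assignment counts
ema_sums = codebook.copy()                     # EMA of summed assigned prompts

def quantize(prompts):
    """Map each intermediate prompt to its nearest codebook vector."""
    dists = ((prompts[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)                 # index of nearest code per instance
    return codebook[idx], idx

def ema_update(prompts, idx):
    """Refresh the codebook with exponential moving averages for stable training."""
    global codebook
    onehot = np.eye(num_codes)[idx]            # (batch, num_codes) assignment matrix
    ema_counts[:] = decay * ema_counts + (1 - decay) * onehot.sum(axis=0)
    ema_sums[:] = decay * ema_sums + (1 - decay) * (onehot.T @ prompts)
    codebook = ema_sums / ema_counts[:, None]  # EMA cluster means become the codes

# One training step: a batch of per-instance intermediate prompts (random here,
# standing in for the lightweight generator's output) is quantized, then the
# codebook is updated from the assignments.
prompts = rng.normal(size=(32, dim))
quantized, idx = quantize(prompts)
ema_update(prompts, idx)
```

Updating the codebook by EMA rather than backpropagation mirrors the common vector-quantization practice of treating each code as a running mean of the vectors assigned to it, which avoids the instability of sparse, high-variance gradients through the discrete assignment.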