Spectral Graph Neural Networks have demonstrated superior performance in graph representation learning. However, most current methods employ shared polynomial coefficients for all nodes, i.e., they learn node-unified filters, which limits the filters' flexibility for node-level tasks. The recent DSF attempts to overcome this limitation by learning node-wise coefficients based on positional encoding. However, initializing and updating the positional encoding is burdensome, hindering scalability on large-scale graphs. In this work, we propose a scalable node-wise filter, PolyAttn. Leveraging the attention mechanism, PolyAttn can directly learn node-wise filters in an efficient manner, offering powerful representation capabilities. Building on PolyAttn, we introduce the complete model, named PolyFormer. Viewed through the lens of Graph Transformers, PolyFormer, which computes attention scores within each node, shows great scalability. Moreover, the model captures spectral information, enhancing expressiveness while maintaining efficiency. With these advantages, PolyFormer offers a desirable balance between scalability and expressiveness for node-level tasks. Extensive experiments demonstrate that our proposed methods excel at learning arbitrary node-wise filters, show superior performance on both homophilic and heterophilic graphs, and handle graphs containing up to 100 million nodes. The code is available at https://github.com/air029/PolyFormer.
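The core idea of attention computed *within* each node over polynomial tokens can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: it uses a monomial propagation basis (the actual model may use other polynomial bases), random stand-in projection weights, and a plain NumPy formulation. The attention distribution each node places over its K+1 polynomial tokens plays the role of node-wise filter coefficients.

```python
import numpy as np

def polyattn_sketch(A_norm, X, K=3, d_h=8, rng=None):
    """Hedged sketch of node-wise polynomial attention (PolyAttn-style).

    For each node v, build K+1 polynomial tokens h_k = A_norm^k x_v,
    then attend over those tokens *within* the node, so the attention
    scores act as node-wise polynomial filter coefficients. Attention is
    over K+1 tokens per node (not over all n nodes), so cost scales
    linearly in n. All weights here are random stand-ins; the real model
    learns them end to end.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape

    # Polynomial token series: tokens[k] = A_norm^k @ X.
    # Monomial basis here; other bases (e.g. Chebyshev) are possible.
    tokens = [X]
    for _ in range(K):
        tokens.append(A_norm @ tokens[-1])
    T = np.stack(tokens, axis=1)                 # (n, K+1, d)

    # Random query/key projections (stand-ins for learned weights).
    Wq, Wk = rng.standard_normal((2, d, d_h)) / np.sqrt(d)
    Q, Km = T @ Wq, T @ Wk                       # (n, K+1, d_h)

    # Attention within each node, across its own K+1 tokens.
    scores = Q @ Km.transpose(0, 2, 1) / np.sqrt(d_h)  # (n, K+1, K+1)
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)

    # Node-wise filtered representation: per-node mixture of tokens.
    out = (attn @ T).mean(axis=1)                # (n, d)
    return out
```

Because every node attends only over its own K+1 tokens, the attention matrix is (K+1) x (K+1) per node rather than n x n over the whole graph, which is what makes the within-node formulation scalable.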