We present ReHub, a novel graph transformer architecture that achieves linear complexity through an efficient reassignment technique between nodes and virtual nodes. Graph transformers have become increasingly important in graph learning for their ability to use explicit long-range node communication, addressing limitations such as oversmoothing and oversquashing found in message-passing graph neural networks. However, their dense attention mechanism scales quadratically with the number of nodes, limiting their applicability to large-scale graphs. ReHub draws inspiration from the airline industry's hub-and-spoke model, in which flights are routed through a small set of hubs to optimize operational efficiency. In our approach, graph nodes (spokes) are dynamically reassigned to a fixed number of virtual nodes (hubs) at each model layer. Recent work, Neural Atoms (Li et al., 2024), demonstrated impressive and consistent improvements over GNN baselines by utilizing such virtual nodes; its findings suggest that the number of hubs strongly influences performance. However, increasing the number of hubs typically raises complexity, forcing a trade-off to keep complexity linear. Our key insight is that each node only needs to interact with a small subset of hubs to achieve linear complexity, even when the total number of hubs is large. To leverage all hubs without incurring additional computational cost, we propose a simple yet effective adaptive reassignment technique based on hub-hub similarity scores, eliminating the need for expensive node-hub computations. Our experiments on LRGB show consistent improvements over the base method, Neural Atoms, while maintaining linear complexity. Remarkably, our sparse model matches the performance of its non-sparse counterpart. Furthermore, ReHub outperforms competitive baselines and consistently ranks among the top performers across various benchmarks.
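To make the sparse reassignment idea concrete, here is a minimal, illustrative sketch: each node keeps an assignment to only k hubs, and the next layer's assignment is derived from hub-hub similarities (an H x H computation independent of the number of nodes), rather than from node-hub scores. The function name, tensor shapes, and the "inherit the primary hub's nearest hubs" heuristic are our own expository assumptions, not the ReHub implementation.

```python
import torch

def reassign_nodes_to_hubs(hub_feats, node_assign, k):
    """Illustrative sketch of hub-similarity-based reassignment.

    hub_feats:   (H, d) embeddings of the virtual nodes (hubs) at this layer
    node_assign: (N, k) indices of the k hubs each node currently attends to
    k:           number of hubs per node (k << H)
    """
    # Hub-hub similarity: O(H^2 * d) and independent of the node count N.
    sim = hub_feats @ hub_feats.T                # (H, H)

    # For every hub, its k most similar hubs (including itself).
    topk_per_hub = sim.topk(k, dim=-1).indices   # (H, k)

    # Each node inherits the neighbourhood of its current primary hub,
    # so per-node work stays O(k) and the total remains linear in N.
    primary_hub = node_assign[:, 0]              # (N,)
    return topk_per_hub[primary_hub]             # (N, k)

if __name__ == "__main__":
    H, N, d, k = 32, 1000, 64, 4
    hub_feats = torch.randn(H, d)
    node_assign = torch.randint(0, H, (N, k))
    print(reassign_nodes_to_hubs(hub_feats, node_assign, k).shape)  # torch.Size([1000, 4])
```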