Graph Transformers (GTs) have achieved remarkable results on graph-level tasks. However, most existing works treat the graph structure as a form of guidance or bias for enhancing node representations, adopting a node-centric perspective that lacks explicit representations of edges and substructures. A natural question is whether a hypernode can be used to represent certain substructures. Through experimental analysis, we explore the feasibility of this idea. Based on our findings, we propose an efficient Loop and Clique Coarsening algorithm with linear complexity for Graph Classification (LCC4GC) on the GT architecture. Specifically, we build three distinct views, original, coarsening, and conversion, to learn a thorough structural representation. We compress loops and cliques via hierarchical heuristic graph coarsening and restrict them with well-designed constraints, yielding the coarsening view, which learns high-level interactions between structures. We also introduce line graphs for edge embeddings and switch to an edge-centric perspective to alleviate the impact of coarsening reduction. Experiments on eight real-world datasets demonstrate the improvements of LCC4GC over 31 baselines from various architectures.
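To make the coarsening idea concrete, the sketch below contracts maximal cliques of a graph into single hypernodes using networkx. This is only an illustrative analogue of the clique half of the coarsening step described above; the function name, the size-3 threshold, and the hypernode labels (`C0`, `C1`, ...) are assumptions, not the paper's actual algorithm, which also handles loops, hierarchy, and constraints.

```python
import networkx as nx

def coarsen_cliques(G):
    """Illustrative sketch (not the paper's algorithm): contract each
    maximal clique of size >= 3 into one hypernode, keeping the edges
    between the hypernode and the rest of the graph."""
    H = G.copy()
    hyper_id = 0
    for clique in nx.find_cliques(G):
        # Skip trivial cliques and any clique whose nodes were already
        # absorbed by an earlier (overlapping) contraction.
        if len(clique) < 3 or not all(H.has_node(v) for v in clique):
            continue
        hypernode = f"C{hyper_id}"  # assumed label scheme
        hyper_id += 1
        H.add_node(hypernode)
        for v in clique:
            # Reconnect outside neighbors of v to the hypernode.
            for u in list(H.neighbors(v)):
                if u not in clique and u != hypernode:
                    H.add_edge(hypernode, u)
            H.remove_node(v)
    return H
```

For the edge-centric conversion view, `nx.line_graph(G)` gives the line graph whose nodes are the edges of `G`, which is one standard way to obtain explicit edge embeddings.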