Graph contrastive learning (GCL) has become a prominent topic in graph representation learning. In contrast to traditional supervised learning, which relies on a large number of labels, GCL exploits augmentation strategies to generate multiple views and positive/negative pairs, both of which strongly influence performance. Unfortunately, commonly used random augmentations may disturb the underlying semantics of graphs. Moreover, traditional GNNs, the encoders most widely employed in GCL, inevitably face the over-smoothing and over-squashing problems. To address these issues, we propose the GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning (GTCA), which inherits the advantages of both GNNs and Transformers and incorporates graph topology to obtain comprehensive graph representations. Theoretical analysis verifies the trustworthiness of the proposed method. Extensive experiments on benchmark datasets demonstrate state-of-the-art empirical performance.
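To make the pipeline the abstract refers to concrete, below is a minimal sketch of a generic GCL setup: a random edge-drop augmentation produces two views of one graph, a simple message-passing encoder embeds each view, and an NT-Xent (InfoNCE) loss treats the two embeddings of the same node as a positive pair and all other nodes as negatives. This illustrates the standard GCL recipe only, not GTCA itself; all names (`drop_edges`, `gcn_layer`, `nt_xent`) and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a generic graph contrastive learning (GCL) pipeline.
# Illustrative only; not the GTCA architecture proposed in this paper.
import torch
import torch.nn.functional as F

def drop_edges(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Randomly drop a fraction p of edges (a common random augmentation)."""
    mask = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, mask]

def gcn_layer(x, edge_index, weight, num_nodes):
    """One mean-aggregation message-passing layer with implicit self-loops."""
    row, col = edge_index
    agg = torch.zeros(num_nodes, x.size(1))
    agg.index_add_(0, row, x[col])                  # sum neighbor features
    deg = torch.zeros(num_nodes).index_add_(0, row, torch.ones(row.size(0)))
    agg = (agg + x) / (deg.unsqueeze(1) + 1)        # add self, mean-normalize
    return torch.relu(agg @ weight)

def nt_xent(z1, z2, tau: float = 0.5):
    """NT-Xent loss: node i in view 1 and node i in view 2 form the positive
    pair; every other node in view 2 serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                      # cosine sims / temperature
    labels = torch.arange(z1.size(0))               # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy graph: 4 nodes with 8-dim features; edges stored as a (2, E) index list.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 0], [1, 2, 3, 0, 2]])
w = torch.randn(8, 16)

# Two randomly augmented views of the same graph, encoded and contrasted.
z1 = gcn_layer(x, drop_edges(edge_index), w, num_nodes=4)
z2 = gcn_layer(x, drop_edges(edge_index), w, num_nodes=4)
print(nt_xent(z1, z2).item())
```

Note that because `drop_edges` is purely random, it may remove semantically important edges, which is exactly the risk the abstract raises about random augmentations disturbing graph semantics.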