Graph representation learning (GRL) has made considerable progress in recent years, encoding graphs with topological structures into low-dimensional embeddings. Meanwhile, the time-consuming and costly process of manually annotating graph labels has spurred the growth of self-supervised learning (SSL) techniques. As a dominant SSL approach, contrastive learning (CL) learns discriminative representations by distinguishing between positive and negative samples. However, when applied to graph data, it overemphasizes global patterns while neglecting local structures. To tackle this issue, we propose \underline{Local}-aware \underline{G}raph \underline{C}ontrastive \underline{L}earning (\textbf{\methnametrim}), a self-supervised learning framework that, compared with vanilla contrastive learning, supplementarily captures local graph information through masking-based modeling. Extensive experiments validate the superiority of \methname over state-of-the-art methods, demonstrating its promise as a comprehensive graph representation learner.