Self-supervised learning (SSL) has recently attracted significant attention in the field of recommender systems. Contrastive learning (CL) stands out as a major SSL paradigm due to its strong ability to generate self-supervised signals. Mainstream graph contrastive learning (GCL)-based methods typically implement CL by creating contrastive views through various data augmentation techniques. Although these methods are effective, we argue that several challenges remain. i) Data augmentation (e.g., discarding edges or adding noise) requires additional graph convolution (GCN) or modeling operations, which are highly time-consuming and may harm embedding quality. ii) Existing CL-based methods rely on conventional CL objectives to capture self-supervised signals; few studies have explored deriving CL objectives from additional perspectives, or fusing the distinct signals these objectives provide, to improve recommendation performance. To address these challenges, we propose a High-order Fusion Graph Contrastive Learning (HFGCL) framework for recommendation. Specifically, instead of performing data augmentation, we use the high-order information produced during the GCN process to create contrastive views. Moreover, to integrate self-supervised signals from multiple CL objectives, we propose an advanced CL objective: by pushing positive pairs away from negative samples drawn from both contrastive views, we effectively fuse the self-supervised signals of distinct CL objectives and thereby enhance the mutual information between positive pairs. Experimental results on three public datasets demonstrate the superior recommendation performance and efficiency of HFGCL compared with state-of-the-art baselines.
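The two core ideas above (contrastive views taken from different propagation depths of the GCN itself, and a CL objective whose negatives come from both views) can be sketched as follows. This is a minimal NumPy illustration under assumed specifics, not the paper's implementation: the function names, the choice of which layers serve as the two views, the temperature value, and the exact form of the fused objective are all illustrative assumptions.

```python
import numpy as np

def normalize(x, axis=-1):
    """L2-normalize embeddings along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def propagate(adj, emb, num_layers):
    """LightGCN-style propagation: repeatedly multiply by the (normalized)
    adjacency matrix and keep the embedding at every layer, so that
    low- and high-order representations are available as contrastive views."""
    layer_embs = [emb]
    h = emb
    for _ in range(num_layers):
        h = adj @ h
        layer_embs.append(h)
    return layer_embs

def fused_infonce(z1, z2, tau=0.2):
    """Hypothetical fused CL objective: each positive pair (z1[i], z2[i])
    is pushed away from negatives drawn from BOTH views, rather than from
    a single view as in a standard InfoNCE loss."""
    z1, z2 = normalize(z1), normalize(z2)
    pos = np.exp(np.sum(z1 * z2, axis=1) / tau)          # cross-view positives
    neg_cross = np.exp(z1 @ z2.T / tau).sum(axis=1)       # negatives from the other view
    neg_self = np.exp(z1 @ z1.T / tau).sum(axis=1)        # negatives from the same view
    return float(-np.log(pos / (neg_cross + neg_self)).mean())

# Usage sketch: treat a low-order layer and a high-order layer as the two views.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 4))                 # 8 nodes, 4-dim embeddings
adj = np.abs(rng.normal(size=(8, 8)))
adj = adj / adj.sum(axis=1, keepdims=True)    # row-normalized toy adjacency
layer_embs = propagate(adj, emb, num_layers=3)
loss = fused_infonce(layer_embs[1], layer_embs[3])
```

Because the views are just intermediate GCN outputs, no extra augmented graph needs to be convolved, which is the efficiency argument the abstract makes against edge-dropping or noise-based augmentation.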