Self-supervised learning (SSL) has recently attracted significant attention in recommender systems. Contrastive learning (CL) stands out as a major SSL paradigm because of its strong ability to generate self-supervised signals. Mainstream graph contrastive learning (GCL)-based methods typically implement CL by creating contrastive views through various data augmentation techniques. Although these methods are effective, we argue that several challenges remain: i) data augmentation (e.g., dropping edges or adding noise) requires additional graph convolutional network (GCN) or modeling operations, which are highly time-consuming and may harm embedding quality; ii) existing CL-based methods rely on traditional CL objectives to capture self-supervised signals, and few studies have explored deriving CL objectives from additional perspectives or fusing the varying signals these objectives provide to improve recommendation performance. To address these challenges, we propose a High-Order Fusion Graph Contrastive Learning (HFGCL) framework for recommendation. Specifically, we discard data augmentation and instead use the high-order information produced during the GCN process to create contrastive views. Furthermore, to integrate self-supervised signals from different CL objectives, we propose an advanced CL objective: by pushing each positive pair away from the negative samples drawn from both contrastive views, we effectively fuse the self-supervised signals of distinct CL objectives, thereby increasing the mutual information between positive pairs. Experimental results on three public datasets demonstrate the superior effectiveness of HFGCL compared to state-of-the-art baselines.
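The fused objective described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple InfoNCE-style formulation in which the two contrastive views are node embeddings taken from different GCN propagation orders (no augmentation), each node's positive pair is its own embedding in the other view, and the negatives in the denominator are pooled from both views, fusing the two per-view CL objectives into one loss. The function name `fused_infonce` and the temperature `tau` are illustrative choices, not names from the paper.

```python
import numpy as np

def fused_infonce(view_a, view_b, tau=0.2):
    """Illustrative fused contrastive loss over two views (e.g., embeddings
    from a low-order and a high-order GCN layer, no data augmentation).

    Positives: the same node across the two views.
    Negatives: all other nodes from BOTH views, so the signals of the
    cross-view and in-view CL objectives are fused in one denominator.
    """
    # L2-normalize rows so dot products are cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)

    # Positive pair score for each node: sim(a_i, b_i).
    pos = np.exp(np.sum(a * b, axis=1) / tau)

    # Negatives drawn from the other view (diagonal is the positive itself,
    # kept in the denominator as in standard InfoNCE).
    neg_cross = np.exp(a @ b.T / tau).sum(axis=1)

    # Negatives drawn from the same view, excluding each node's self-similarity.
    neg_in = np.exp(a @ a.T / tau).sum(axis=1) - np.exp(1.0 / tau)

    # -log( pos / (negatives from both views) ), averaged over nodes.
    return float(-np.log(pos / (neg_cross + neg_in)).mean())
```

Because the positive term also appears in the cross-view denominator, the ratio is strictly below one, so the loss is always positive; minimizing it pulls the two views of each node together while pushing that node away from negatives of both views simultaneously.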