Recommender systems (RecSys) play a vital role in online platforms, offering users personalized suggestions amidst vast amounts of information. Graph contrastive learning aims to learn from high-order collaborative filtering signals with unsupervised augmentation on the user-item bipartite graph, and predominantly relies on a multi-task learning framework that combines a pair-wise recommendation loss with a contrastive loss. This decoupled design can cause inconsistent optimization directions between the two losses, leading to longer convergence time and even sub-optimal performance. Moreover, the self-supervised contrastive loss falls short of alleviating the data sparsity issue in RecSys, as it learns to differentiate users/items across views without providing extra supervised collaborative filtering signals during augmentation. In this paper, we propose Mixed Supervised Graph Contrastive Learning for Recommendation (MixSGCL) to address these concerns. MixSGCL integrates the training of the recommendation and unsupervised contrastive losses into a single supervised contrastive learning loss, aligning the two tasks along one optimization direction. To cope with the data sparsity issue, instead of unsupervised augmentation, we further propose node-wise and edge-wise mixup to mine more direct supervised collaborative filtering signals from existing user-item interactions. Extensive experiments on three real-world datasets demonstrate that MixSGCL surpasses state-of-the-art methods, achieving top performance in both accuracy and efficiency. This validates the effectiveness of MixSGCL's coupled design for supervised graph contrastive learning.
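To make the two core ideas concrete, the following is a minimal, hypothetical NumPy sketch of (a) node-wise mixup, which blends two node embeddings to synthesize an extra positive view from existing interactions, and (b) a supervised contrastive (InfoNCE-style) loss in which each observed user-item pair is a positive and in-batch items act as negatives. All function names, shapes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-12):
    # Normalize embeddings so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def node_mixup(emb_a, emb_b, alpha=0.5):
    # Node-wise mixup (illustrative): convex blend of two node
    # embeddings, with the mixing weight drawn from Beta(alpha, alpha).
    lam = rng.beta(alpha, alpha)
    return lam * emb_a + (1.0 - lam) * emb_b

def sup_contrastive_loss(user_emb, item_emb, tau=0.2):
    # Supervised contrastive loss over a batch of observed (user, item)
    # pairs: positives sit on the diagonal of the similarity matrix,
    # all other in-batch items serve as negatives.
    u = l2_normalize(user_emb)
    v = l2_normalize(item_emb)
    logits = u @ v.T / tau                        # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy batch of 8 user/item embeddings of dimension 16.
B, d = 8, 16
users = rng.normal(size=(B, d))
items = rng.normal(size=(B, d))
mixed_users = node_mixup(users, users[::-1])      # extra mixed positives
loss = sup_contrastive_loss(users, items)
```

Because the same objective both pulls interacted user-item pairs together and pushes non-interacted ones apart, no separate pair-wise recommendation loss is needed, which is the coupling the abstract describes.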