Contrastive learning has been effectively applied to alleviate the data sparsity issue and enhance recommendation performance. The majority of existing methods employ random augmentation to generate augmented views of original sequences, and the learning objective then minimizes the distance between representations of different views of the same user. However, these random augmentation strategies (e.g., mask or substitution) neglect the semantic consistency of different augmented views for the same user, so semantically inconsistent sequences are pushed toward similar representations. Furthermore, most augmentation methods fail to utilize context information, which is critical for understanding sequence semantics. To address these limitations, we introduce a diffusion-based contrastive learning approach for sequential recommendation. Specifically, given a user sequence, we first select some positions and then leverage context information to guide the generation of alternative items via a guided diffusion model. By repeating this procedure, we obtain semantically consistent augmented views for the same user, which are used to improve the effectiveness of contrastive learning. To maintain cohesion between the representation spaces of the diffusion model and the recommendation model, we train the entire framework in an end-to-end fashion with shared item embeddings. Extensive experiments on five benchmark datasets demonstrate the superiority of our proposed method.
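The pipeline the abstract describes can be outlined in code. This is a minimal NumPy sketch, not the paper's method: the guided diffusion model is stood in for by a simple random substitution at selected positions (labeled as such below), the encoder is mean pooling over shared item embeddings, and the contrastive objective is a standard InfoNCE loss in which a user's two augmented views are positives and other users in the batch are negatives. All names (`augment`, `encode`, `info_nce`) and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(seq, n_items, p=0.3):
    """Stand-in for the paper's augmentation step: select a fraction of
    positions and replace the items there. The actual method samples the
    replacement items from a context-guided diffusion model instead of
    uniformly at random, which is what preserves semantic consistency."""
    seq = seq.copy()
    pos = rng.random(len(seq)) < p          # positions chosen for substitution
    seq[pos] = rng.integers(0, n_items, pos.sum())
    return seq

def encode(seq, item_emb):
    """Toy sequence encoder: mean-pool the (shared) item embeddings and
    L2-normalize, so similarities below are cosine similarities."""
    v = item_emb[seq].mean(axis=0)
    return v / np.linalg.norm(v)

def info_nce(z1, z2, tau=0.2):
    """InfoNCE over a batch: row i of z1 and row i of z2 are two views of
    the same user (positive pair); all other rows act as negatives."""
    logits = z1 @ z2.T / tau                               # (batch, batch)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_prob[idx, idx].mean()

# Tiny synthetic batch: 4 users, sequences of 10 items, 100-item catalog.
n_items, dim = 100, 16
item_emb = rng.normal(size=(n_items, dim))  # shared by encoder and augmenter
batch = [rng.integers(0, n_items, 10) for _ in range(4)]

# Two independently augmented views per user, encoded with shared embeddings.
z1 = np.stack([encode(augment(s, n_items), item_emb) for s in batch])
z2 = np.stack([encode(augment(s, n_items), item_emb) for s in batch])
loss = info_nce(z1, z2)
print(float(loss))
```

In the full framework, the gradient of this loss would flow into both the recommendation encoder and the diffusion model through the shared `item_emb` table, which is what the end-to-end training with shared item embeddings refers to.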