Graph-based models and contrastive learning have emerged as prominent methods in Collaborative Filtering (CF). While many existing CF models incorporate these methods in their design, analysis of the foundational principles behind them remains limited. This paper bridges graph convolution, a pivotal element of graph-based models, with contrastive learning through a theoretical framework. By examining the learning dynamics and equilibrium of the contrastive loss, we offer a fresh lens for understanding contrastive learning via graph theory, emphasizing its capability to capture high-order connectivity. Building on this analysis, we further show that the graph convolutional layers often used in graph-based models are not essential for modeling high-order connectivity and may increase the risk of oversmoothing. Stemming from our findings, we introduce Simple Contrastive Collaborative Filtering (SCCF), a simple and effective algorithm based on a naive embedding model and a modified contrastive loss. The efficacy of the algorithm is demonstrated through extensive experiments across four public datasets. The experiment code is available at \url{https://github.com/wu1hong/SCCF}. \end{abstract}