Large language models (LLMs) have attracted significant attention in recommender systems. Current LLM-based recommenders primarily rely on supervised fine-tuning (SFT) to adapt the model to recommendation tasks. However, training solely on positive samples limits the model's ability to align with user satisfaction and expectations. To address this, researchers have introduced Direct Preference Optimization (DPO), which explicitly aligns recommendations with user preferences using offline preference ranking data. Despite its advantages, our theoretical analysis reveals that DPO inherently biases the model toward a small set of items, exacerbating the filter bubble problem and ultimately degrading the user experience. In this paper, we propose SPRec, a novel self-play recommendation framework designed to mitigate over-recommendation and improve fairness without requiring additional data or manual intervention. In each self-play iteration, the model undergoes an SFT step followed by a DPO step, treating offline interaction data as positive samples and the predicted outputs from the previous iteration as negative samples. This effectively re-weights the DPO loss function using the model's own logits, adaptively suppressing biased items. Extensive experiments on multiple real-world datasets demonstrate SPRec's effectiveness in improving recommendation accuracy and addressing fairness concerns.
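The self-play DPO step described above can be sketched numerically. The following is a minimal, illustrative sketch (not the paper's implementation): it computes a standard DPO loss for one preference pair, where the positive is an item from the offline interaction data and the negative is the previous iteration's predicted item, as SPRec prescribes. All variable names and log-probability values below are hypothetical, and sequence-level log-probs are assumed to be available from both the current policy and a frozen reference model.

```python
import math

def dpo_loss(pi_pos, ref_pos, pi_neg, ref_neg, beta=0.1):
    """DPO loss for one (positive, negative) pair of sequence log-probs.

    pi_*  : log-prob of the item text under the current policy
    ref_* : log-prob under the frozen reference model
    Returns -log(sigmoid(beta * (policy margin - reference margin))).
    """
    margin = beta * ((pi_pos - ref_pos) - (pi_neg - ref_neg))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# One SPRec-style pair (toy numbers): the negative is whatever the
# *previous* policy predicted, not a manually curated disliked item.
offline_positive = {"pi": -2.0, "ref": -2.5}   # ground-truth interacted item
selfplay_negative = {"pi": -1.0, "ref": -1.2}  # previous iteration's top output

loss = dpo_loss(offline_positive["pi"], offline_positive["ref"],
                selfplay_negative["pi"], selfplay_negative["ref"])
```

Because the negative side is sampled from the model's own previous predictions, items the model currently over-recommends appear as negatives more often, which is the adaptive suppression the abstract refers to.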