Recent advancements in recommender systems have focused on leveraging Large Language Models (LLMs) to improve user preference modeling, yielding promising outcomes. However, current LLM-based approaches struggle to fully exploit user behavior sequences, resulting in suboptimal preference modeling for personalized recommendations. In this study, we propose a novel Counterfactual Fine-Tuning (CFT) method to address this issue by explicitly emphasizing the role of behavior sequences when generating recommendations. Specifically, we employ counterfactual reasoning to identify the causal effect of behavior sequences on model output and introduce a task that directly fits the ground-truth labels based on this effect, achieving the goal of explicit emphasis. Additionally, we develop a token-level weighting mechanism to adjust the emphasis strength for different item tokens, reflecting the diminishing influence of behavior sequences from earlier to later tokens when predicting an item. Extensive experiments on real-world datasets demonstrate that CFT effectively improves behavior sequence modeling. Our code is available at https://github.com/itsmeyjt/CFT.
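The two ideas in the abstract can be sketched together: the causal effect of the behavior sequence is taken as the difference between the model's logits computed with and without the user's history, the ground-truth item tokens are fit against these effect logits, and a position-dependent weight downweights later item tokens. This is a minimal illustrative sketch, not the paper's implementation; the function name `cft_loss`, the exponential `decay` schedule, and the use of raw logit differences are all assumptions for illustration.

```python
import numpy as np

def cft_loss(logits_with_seq, logits_without_seq, target_ids, decay=0.8):
    """Hypothetical sketch of the CFT objective.

    logits_with_seq / logits_without_seq: (num_item_tokens, vocab_size)
        next-token logits for the target item, computed with and without
        the user's behavior sequence in the prompt.
    target_ids: (num_item_tokens,) ground-truth token ids of the item.
    decay: assumed per-position factor shrinking the emphasis on later
        item tokens (the paper only states that influence diminishes).
    """
    # Counterfactual effect of the behavior sequence on the output.
    effect = logits_with_seq - logits_without_seq
    # Numerically stable log-softmax over the effect logits.
    shifted = effect - effect.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Per-token cross-entropy: fit ground-truth labels on the effect.
    token_loss = -log_probs[np.arange(len(target_ids)), target_ids]
    # Token-level weights decaying from earlier to later item tokens.
    weights = decay ** np.arange(len(target_ids), dtype=float)
    return float((weights * token_loss).sum() / weights.sum())
```

In practice this term would be added to (or replace part of) the standard fine-tuning loss, with gradients flowing through the with-sequence branch.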