This paper presents ReasoningRec, a reasoning-based recommendation framework that leverages Large Language Models (LLMs) to bridge the gap between recommendations and human-interpretable explanations. In contrast to conventional recommendation systems that rely on implicit user-item interactions, ReasoningRec employs LLMs to model users and items, focusing on preferences, aversions, and explanatory reasoning. The framework uses a larger LLM to generate synthetic explanations of user preferences, which are then used to fine-tune a smaller LLM for improved recommendation accuracy and human-interpretable explanations. Our experimental study investigates the impact of reasoning and contextual information on personalized recommendations, revealing that the quality of contextual and personalized data significantly influences the LLM's capacity to generate plausible explanations. Empirical evaluations demonstrate that ReasoningRec surpasses state-of-the-art methods by up to 12.5\% in recommendation prediction while concurrently providing human-intelligible explanations. The code is available at https://github.com/millenniumbismay/reasoningrec.