The explainability of recommendation systems is crucial for enhancing user trust and satisfaction. Leveraging large language models (LLMs) offers new opportunities for generating comprehensive recommendation logic. However, in existing studies, fine-tuning LLMs for recommendation tasks incurs high computational costs and alignment issues with existing systems, limiting the application potential of proven proprietary, closed-source LLMs such as GPT-4. In this work, our proposed strategy, LANE, aligns LLMs with online recommendation systems without additional LLM tuning, reducing costs and improving explainability. This approach addresses key challenges in integrating language models with recommendation systems while fully exploiting the capabilities of powerful proprietary models. Specifically, our strategy operates through several key components: semantic embedding, user multi-preference extraction via zero-shot prompting, semantic alignment, and explainable recommendation generation via Chain-of-Thought (CoT) prompting. By embedding item titles instead of IDs and employing multi-head attention, our approach aligns the semantic features of user preferences with those of candidate items, ensuring coherent and user-aligned recommendations. Extensive experimental results, including performance comparisons, questionnaire voting, and visualization cases, demonstrate that our method not only preserves recommendation performance but also provides easy-to-understand and well-reasoned recommendation logic.
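To make the alignment step concrete, the sketch below shows how multi-head attention can score candidate items against extracted user-preference embeddings. This is a minimal illustration in NumPy, not the paper's implementation: the projection weights are random here (LANE would learn them), and the function names and dimensions are assumptions for the example.

```python
import numpy as np

def multi_head_alignment(prefs, items, num_heads=4, seed=0):
    """Score candidate items against user preferences via multi-head
    attention. `prefs`: (P, d) embeddings of extracted preferences;
    `items`: (I, d) embeddings of candidate item titles.
    NOTE: random projections for illustration only."""
    d = prefs.shape[-1]
    assert d % num_heads == 0, "embedding dim must divide evenly into heads"
    dh = d // num_heads
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)  # query projection
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)  # key projection

    def project(x, W):
        # project, then split the last dim into heads: (n, h, dh)
        return (x @ W).reshape(x.shape[0], num_heads, dh)

    q = project(prefs, Wq)   # one query per extracted preference
    k = project(items, Wk)   # one key per candidate item title
    # scaled dot-product logits per head: (h, P, I)
    logits = np.einsum('phd,ihd->hpi', q, k) / np.sqrt(dh)
    # softmax over the candidate-item axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)
    # average over heads and preferences -> one alignment score per item
    return attn.mean(axis=(0, 1))

# toy example: 3 preference embeddings, 5 candidate items, dim 8
prefs = np.random.default_rng(1).standard_normal((3, 8))
cands = np.random.default_rng(2).standard_normal((5, 8))
scores = multi_head_alignment(prefs, cands)
```

Because the softmax normalizes over candidate items, the averaged scores form a distribution over candidates, which can be read as how strongly each item matches the user's aggregated preferences.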