The rise of Large Language Models (LLMs), such as LLaMA and ChatGPT, has opened new opportunities for enhancing recommender systems through improved explainability. This paper provides a systematic literature review focused on leveraging LLMs to generate explanations for recommendations -- a critical aspect for fostering transparency and user trust. We conducted a comprehensive search within the ACM Guide to Computing Literature, covering publications from the launch of ChatGPT (November 2022) to the present (November 2024). Our search yielded 232 articles, but after applying inclusion criteria, only six were identified as directly addressing the use of LLMs in explaining recommendations. This scarcity highlights that, despite the rise of LLMs, their application in explainable recommender systems is still at an early stage. We analyze these selected studies to understand current methodologies, identify challenges, and suggest directions for future research. Our findings underscore the potential of LLMs to improve explanations in recommender systems and encourage the development of more transparent and user-centric recommendation explanation solutions.