Recommender systems are widely used in online services, with embedding-based models being particularly popular due to their expressiveness in representing complex signals. However, these models often function as black boxes, making them less transparent and reliable for both users and developers. Recently, large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following. This paper presents an initial exploration of using LLMs as surrogate models to explain black-box recommender models. The primary concept involves training LLMs to comprehend and emulate the behavior of target recommender models. By leveraging LLMs' extensive world knowledge and multi-step reasoning abilities, these aligned LLMs can serve as advanced surrogates capable of reasoning about observations. Moreover, employing natural language as an interface allows for the creation of customizable explanations that can be adapted to individual user preferences. To facilitate effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment. Behavior alignment operates in the language space, representing user preferences and item information as text to mimic the target model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both the language and latent spaces. Comprehensive experiments conducted on three public datasets show that our approach yields promising results in understanding and mimicking target models, producing high-quality, high-fidelity, and distinct explanations. Our code is available at https://github.com/microsoft/RecAI.