Recommender systems (RecSys) have become critical tools for enhancing user engagement by delivering personalized content across diverse digital platforms. Recent advancements in large language models (LLMs) demonstrate significant potential for improving RecSys, primarily due to their exceptional generalization capabilities and sophisticated contextual understanding, which facilitate the generation of flexible and interpretable recommendations. However, the direct deployment of LLMs as primary recommendation policies presents notable challenges, including persistent latency issues stemming from frequent API calls and inherent model limitations such as hallucinations and biases. To address these issues, this paper proposes a novel offline reinforcement learning (RL) framework that leverages imitation learning from LLM-generated trajectories. Specifically, inverse reinforcement learning is employed to extract robust reward models from LLM demonstrations. This approach negates the need for LLM fine-tuning, thereby substantially reducing computational overhead. Simultaneously, the RL policy is guided by the cumulative rewards derived from these demonstrations, effectively transferring the semantic insights captured by the LLM. Comprehensive experiments conducted on two benchmark datasets validate the effectiveness of the proposed method, demonstrating superior performance when compared against state-of-the-art RL-based and in-context learning baselines. The code can be found at https://github.com/ArronDZhang/IL-Rec.
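To make the two-stage pipeline described above more concrete (reward extraction from LLM demonstrations via inverse RL, followed by reward-guided offline policy learning), the following is a minimal, hypothetical sketch rather than the released IL-Rec implementation: the reward-model architecture, the pairwise ranking objective, the reward-weighted cloning update, and all dimensions and data are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' code): learn a reward model from
# LLM-generated trajectories with a simple IRL-style ranking objective, then use
# the learned rewards to guide an offline recommendation policy.
import torch
import torch.nn as nn

STATE_DIM, N_ITEMS = 16, 50  # hypothetical user-state size and item-catalogue size

class RewardModel(nn.Module):
    """Scores (state, item) pairs; trained to prefer LLM-demonstrated items."""
    def __init__(self):
        super().__init__()
        self.item_emb = nn.Embedding(N_ITEMS, STATE_DIM)
        self.mlp = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, state, item):
        return self.mlp(torch.cat([state, self.item_emb(item)], dim=-1)).squeeze(-1)

class Policy(nn.Module):
    """Recommendation policy: user state -> logits over the item catalogue."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ITEMS))

    def forward(self, state):
        return self.net(state)

# Synthetic offline data: user states with LLM-demonstrated vs. logged items.
states = torch.randn(256, STATE_DIM)
llm_items = torch.randint(0, N_ITEMS, (256,))     # "expert" actions from LLM trajectories
logged_items = torch.randint(0, N_ITEMS, (256,))  # non-expert / logged actions

# Step 1: IRL-style reward learning -- rank demonstrated pairs above logged pairs.
reward_model = RewardModel()
opt_r = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
for _ in range(200):
    margin = reward_model(states, llm_items) - reward_model(states, logged_items)
    loss_r = nn.functional.softplus(-margin).mean()  # pairwise ranking loss
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

# Step 2: offline policy learning guided by the learned rewards
# (reward-weighted behavioral cloning as a stand-in for the paper's RL update).
policy = Policy()
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    with torch.no_grad():
        weights = torch.sigmoid(reward_model(states, llm_items))  # reward-derived weights
    log_probs = torch.log_softmax(policy(states), dim=-1)
    loss_p = -(weights * log_probs.gather(1, llm_items.unsqueeze(1)).squeeze(1)).mean()
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
```

The key property this sketch tries to capture is that the LLM is queried only to produce demonstrations offline; neither reward learning nor policy optimization requires further LLM calls or fine-tuning at training or serving time.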