Frequently updating Large Language Model (LLM)-based recommender systems to adapt to new user interests, as is done for traditional recommenders, is impractical due to high training costs, even with acceleration methods. This work explores adapting to dynamic user interests without any model updates by leveraging In-Context Learning (ICL), which allows LLMs to learn new tasks from few-shot examples provided in the input. By using new-interest examples as the ICL few-shot examples, LLMs can capture users' real-time interests directly, avoiding the need for model updates. However, existing LLM-based recommenders often lose their in-context learning ability during recommendation tuning, while the original LLM's in-context learning lacks a recommendation-specific focus. To address this, we propose RecICL, which customizes recommendation-specific in-context learning for real-time recommendations. RecICL organizes training examples in an in-context learning format, ensuring that the in-context learning ability is preserved and aligned with the recommendation task during tuning. Extensive experiments demonstrate RecICL's effectiveness in delivering real-time recommendations without requiring model updates. Our code is available at https://github.com/ym689/rec_icl.
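To make the core idea concrete, the following is a minimal sketch of how new-interest interactions might be formatted as in-context demonstrations ahead of the target query. The template wording and function names are illustrative assumptions, not the paper's actual prompt format.

```python
# Sketch (assumed template, not RecICL's exact one): recent
# (history -> next item) pairs become few-shot demonstrations,
# letting the LLM infer current interest without model updates.

def format_example(history, next_item=None):
    """Render one (history -> next item) pair as an ICL block.

    When next_item is None, the "Next item:" slot is left blank
    for the model to complete.
    """
    lines = ["User history: " + ", ".join(history)]
    lines.append("Next item: " + (next_item if next_item is not None else ""))
    return "\n".join(lines)

def build_icl_prompt(demonstrations, target_history):
    """Prepend new-interest demonstrations to the target query."""
    blocks = [format_example(h, n) for h, n in demonstrations]
    blocks.append(format_example(target_history))  # query block, answer blank
    return "\n\n".join(blocks)

# Hypothetical recent interactions reflecting a user's new interest.
demos = [(["item_a", "item_b"], "item_c"),
         (["item_d"], "item_e")]
prompt = build_icl_prompt(demos, ["item_f", "item_g"])
```

Because the demonstrations come from the most recent interactions rather than the training set, the same frozen model can track interest drift at inference time; RecICL's contribution is tuning the model on examples arranged in this format so the ICL ability survives recommendation tuning.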