Traditional recommendation systems are subject to a strong feedback loop: by learning from and reinforcing past user-item interactions, they limit the discovery of novel user interests. To address this, we introduce a hybrid hierarchical framework that combines Large Language Models (LLMs) with classic recommendation models for user interest exploration. The framework controls the interface between the LLMs and the classic recommendation models through "interest clusters", whose granularity can be explicitly set by algorithm designers. At the high level, it represents "interest clusters" in natural language and employs a fine-tuned LLM to generate novel interest descriptions that fall strictly within these predefined clusters. At the low level, it grounds these generated interests in an item-level policy by restricting a classic recommendation model, in this case a transformer-based sequence recommender, to return items that belong to the novel clusters generated at the high level. We showcase the efficacy of this approach on an industrial-scale commercial platform serving billions of users. Live experiments show a significant increase in both exploration of novel interests and overall user enjoyment of the platform.
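The two-level control flow described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the cluster names, the `generate_novel_cluster` and `recommend_in_cluster` helpers, and the toy LLM output are all hypothetical placeholders standing in for the fine-tuned LLM and the transformer-based sequence recommender.

```python
# Hypothetical predefined interest clusters (granularity chosen by the designer).
PREDEFINED_CLUSTERS = {"jazz piano", "urban gardening", "retro gaming"}

def generate_novel_cluster(user_history_clusters, llm_generations):
    """High level: keep only LLM-generated interest descriptions that are
    strictly within the predefined clusters and novel to this user."""
    for desc in llm_generations:  # stand-in for fine-tuned LLM decoding output
        if desc in PREDEFINED_CLUSTERS and desc not in user_history_clusters:
            return desc
    return None

def recommend_in_cluster(ranked_items, target_cluster, k=3):
    """Low level: restrict a classic sequence recommender's ranked item list
    to items whose cluster matches the novel cluster chosen at the high level."""
    return [item for item, cluster in ranked_items if cluster == target_cluster][:k]

# Toy usage with made-up data:
history = {"jazz piano"}                          # clusters the user already engages with
llm_out = ["jazz piano", "urban gardening"]       # hypothetical LLM generations
ranked = [("seed-starting 101", "urban gardening"),
          ("speedrun basics", "retro gaming"),
          ("balcony composting", "urban gardening")]

novel = generate_novel_cluster(history, llm_out)
print(novel)                                      # "jazz piano" is filtered as non-novel
print(recommend_in_cluster(ranked, novel))
```

The key design point the sketch mirrors is that the LLM never emits free-form recommendations: its output space is constrained to the predefined clusters, and the item-level policy only ranks within the selected cluster.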