Designing effective prompts can empower LLMs to understand user preferences and provide recommendations by leveraging LLMs' intent comprehension and knowledge utilization capabilities. However, existing research predominantly concentrates on task-wise prompting, developing fixed prompt templates composed of four patterns (i.e., role-playing, history records, reasoning guidance, and output format) and applying them to all users for a given task. Although convenient, task-wise prompting overlooks individual user differences, leading to potential mismatches in capturing user preferences. To address this, we introduce the concept of instance-wise prompting, which personalizes discrete prompts for individual users, and propose Reinforced Prompt Personalization (RPP) to optimize the four patterns in prompts using multi-agent reinforcement learning (MARL). To boost efficiency, RPP formulates prompt personalization as selecting optimal sentences holistically across the four patterns, rather than optimizing word-by-word. To ensure prompt quality, RPP meticulously crafts diverse expressions for each of the four patterns, considering multiple analytical perspectives for specific recommendation tasks. Building on RPP, we further propose RPP+, which enhances the scalability of the action space by dynamically refining actions with LLMs throughout the iterative process. We evaluate the effectiveness of RPP/RPP+ on ranking tasks over various datasets. Experimental results demonstrate the superiority of RPP/RPP+ over traditional recommender models, few-shot methods, and other prompt-based methods, underscoring the significance of instance-wise prompting for LLMs in recommendation tasks and validating the effectiveness of RPP/RPP+. Our code is available at https://github.com/maowenyu-11/RPP.
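To make the instance-wise formulation concrete, the sketch below assembles a personalized prompt by having each of the four patterns contribute one sentence chosen from a candidate pool. This is a minimal, illustrative sketch: the candidate sentences and the `assemble_prompt` helper are hypothetical placeholders, and the per-pattern choice (random here) stands in for the learned per-agent MARL policies in RPP.

```python
import random

# Hypothetical candidate pools: a few pre-written sentences per pattern.
# In RPP these pools are crafted from multiple analytical perspectives
# for the specific recommendation task; these are toy placeholders.
PATTERN_POOLS = {
    "role_playing": [
        "You are a movie recommender.",
        "Act as an expert in analyzing user preferences.",
    ],
    "history_records": [
        "The user recently watched: {history}.",
        "Consider the user's viewing history: {history}.",
    ],
    "reasoning_guidance": [
        "Think step by step about the user's tastes.",
        "Infer the user's genre preferences before ranking.",
    ],
    "output_format": [
        "Return a ranked list of item titles.",
        "Output the top items, one per line.",
    ],
}

def assemble_prompt(actions, history):
    """Compose an instance-wise prompt: one sentence per pattern.

    `actions` maps each pattern to the sentence index chosen by that
    pattern's agent; in RPP these indices come from learned policies
    rather than being fixed template slots shared by all users.
    """
    parts = [PATTERN_POOLS[p][i] for p, i in actions.items()]
    return " ".join(parts).format(history=", ".join(history))

# Stand-in for per-pattern agent decisions (random here, learned in RPP).
actions = {p: random.randrange(len(pool)) for p, pool in PATTERN_POOLS.items()}
prompt = assemble_prompt(actions, ["Inception", "Her"])
print(prompt)
```

Selecting among whole sentences keeps the action space small (here, 2 options per pattern, 16 prompts total) compared with word-by-word optimization, which is the efficiency argument made above.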