Personalization in Question Answering (QA) requires answers that are both accurate and aligned with a user's background, preferences, and historical context. Existing state-of-the-art methods rely primarily on retrieval-augmented generation (RAG), constructing personal context by retrieving relevant items from the user's profile. These methods typically use the user's query directly to retrieve personal documents, a strategy that often yields only surface-level personalization. We propose PR2 (Personalized Retrieval-Augmented Reasoning), a reinforcement learning framework that integrates reasoning with retrieval from personal context. PR2 learns adaptive retrieval-reasoning policies that determine when to retrieve, what evidence to retrieve from the user's profile, and how to incorporate it into intermediate reasoning steps. By optimizing multi-turn reasoning trajectories under a personalized reward function, the framework reinforces reasoning paths that better align with the user-specific preferences and contextual signals captured by the reward model. Extensive experiments on the LaMP-QA benchmark with three LLMs show that PR2 consistently outperforms strong baselines, achieving average relative improvements of 8.8%-12% in personalized QA.
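To make the interaction between the retrieval-reasoning policy and the personalized reward concrete, below is a minimal, self-contained toy sketch of the kind of loop the abstract describes. Every component here is a hypothetical stand-in: the lexical `retrieve` function, the single-parameter Bernoulli "when to retrieve" policy, and the overlap-based `personalized_reward` are not the authors' implementation, and the update rule is plain REINFORCE rather than the paper's actual RL algorithm.

```python
# Toy sketch of a multi-turn retrieve-or-reason policy trained with a
# personalized reward. All components are hypothetical stand-ins, not PR2's
# actual implementation; the optimizer is simplified to REINFORCE.

import math
import random

def retrieve(profile, query, k=2):
    """Toy lexical retriever: rank profile items by word overlap with the query."""
    overlap = lambda d: len(set(d.split()) & set(query.split()))
    return sorted(profile, key=overlap, reverse=True)[:k]

def rollout(theta, question, profile, max_turns=4):
    """Sample one multi-turn trajectory. At each turn a Bernoulli policy
    (retrieve with probability sigmoid(theta)) decides whether to pull more
    personal evidence or to take a reasoning step."""
    p = 1.0 / (1.0 + math.exp(-theta))
    evidence, grad_logp = [], 0.0
    for _ in range(max_turns):
        if random.random() < p:                 # action: retrieve
            evidence += retrieve(profile, question)
            grad_logp += (1.0 - p)              # d/dtheta of log p
        else:                                   # action: reason (a no-op in this toy)
            grad_logp += -p                     # d/dtheta of log (1 - p)
    return evidence, grad_logp

def personalized_reward(evidence, profile):
    """Toy reward model: fraction of distinct profile items grounding the trajectory."""
    return len(set(evidence)) / len(profile)

profile = ["prefers lightweight laptops", "budget under 1000 dollars",
           "uses Linux daily", "travels often for work"]
theta, lr, baseline = 0.0, 0.5, 0.0
for step in range(200):
    evidence, grad_logp = rollout(theta, "which laptop should I buy", profile)
    r = personalized_reward(evidence, profile)
    theta += lr * (r - baseline) * grad_logp    # REINFORCE update
    baseline = 0.9 * baseline + 0.1 * r         # moving-average reward baseline
print(f"learned retrieve probability: {1 / (1 + math.exp(-theta)):.2f}")
```

Because the toy reward grows with the amount of distinct profile evidence used, the REINFORCE loop pushes the retrieve probability upward, illustrating in miniature how a personalized reward signal can shape when and how much the policy retrieves.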