Complex dialog systems often use retrieved evidence to ground factual responses. Such Retrieval-Augmented Generation (RAG) systems retrieve from massive heterogeneous data stores that are usually architected as multiple indexes or APIs rather than a single monolithic source. For a given query, relevant evidence must be retrieved from one or a small subset of these retrieval sources, and complex queries may even require multi-step retrieval. For example, a conversational agent on a retail site answering customer questions about past orders must first retrieve the appropriate customer order and then the evidence relevant to the customer's question in the context of the ordered product. Most RAG agents handle such Chain-of-Thought (CoT) tasks by interleaving reasoning and retrieval steps. However, each reasoning step adds directly to the latency of the system, and for large models (>100B parameters) this latency cost is significant: on the order of several seconds. Multi-agent systems may instead classify the query and route it to a single agent associated with a retrieval source, but this means a (small) classification model dictates the performance of a large language model. In this work we present REAPER (REAsoning-based PlannER), an LLM-based planner that generates retrieval plans in conversational systems. We show significant latency gains over agent-based systems, and REAPER scales easily to new and unseen use cases compared to classification-based planning. Though our method can be applied to any RAG system, we demonstrate our results in the context of Rufus, Amazon's conversational shopping assistant.
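To make the past-order example concrete, the two-step retrieval it describes can be sketched as a structured plan emitted in a single planning pass, where the second step consumes the output of the first. This is a minimal illustrative sketch only; the tool names, plan schema, and `$order.product_id` reference syntax are all hypothetical assumptions, not REAPER's actual output format.

```python
# Hypothetical sketch of a multi-step retrieval plan for the past-order
# example. Tool names, fields, and the "$var.field" reference syntax are
# assumptions for illustration, not the paper's actual plan format.

def plan_for_order_question(query: str) -> list[dict]:
    """Return a hypothetical two-step retrieval plan for an order question."""
    return [
        {
            "step": 1,
            "tool": "order_lookup",             # hypothetical retrieval source
            "args": {"query": query},
            "output_var": "order",
        },
        {
            "step": 2,
            "tool": "product_evidence_search",  # hypothetical retrieval source
            # Step 2 references step 1's output, so both retrievals can be
            # planned up front instead of interleaving reasoning and retrieval.
            "args": {"query": query, "product_id": "$order.product_id"},
            "output_var": "evidence",
        },
    ]

plan = plan_for_order_question("Why hasn't my blender shipped yet?")
print(len(plan), plan[1]["args"]["product_id"])
```

Because the whole plan is produced in one LLM call, the executor can run the retrieval steps without invoking the large model again between them, which is the latency saving the abstract describes.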