Large reasoning models such as DeepSeek-R1 and OpenAI o1 generate extended chains of thought spanning thousands of tokens, yet their integration with retrieval-augmented generation (RAG) remains fundamentally misaligned. Current RAG systems optimize for providing context before reasoning begins, whereas reasoning models need evidence injected during multi-step inference chains. We introduce ReaLM-Retrieve, a reasoning-aware retrieval framework that addresses this mismatch through three key innovations: (1) a step-level uncertainty detector that identifies knowledge gaps at the granularity of reasoning steps rather than tokens or sentences; (2) a retrieval intervention policy that learns when external evidence most benefits ongoing reasoning; and (3) an efficiency-optimized integration mechanism with 3.2x lower per-retrieval overhead than naive integration. Experiments on MuSiQue, HotpotQA, and 2WikiMultiHopQA show that ReaLM-Retrieve improves answer F1 by 10.1 points absolute on average over standard RAG (range: 9.0-11.8 across the three benchmarks) while issuing 47% fewer retrieval calls than fixed-interval approaches such as IRCoT (all improvements significant at p<0.01, paired bootstrap). On the challenging MuSiQue benchmark, which requires 2-4 hop reasoning, our method reaches 71.2% F1 with an average of only 1.8 retrieval calls per question. Analysis further shows that ReaLM-Retrieve improves retrieval quality itself, reaching 81.3% Recall@5 on supporting evidence with consistently higher precision and MRR than fixed-interval baselines, establishing a new state-of-the-art efficiency-accuracy trade-off for reasoning-intensive retrieval tasks.
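The abstract's core idea, triggering retrieval only when a reasoning step signals a knowledge gap, can be illustrated with a minimal sketch. The paper's actual detector, features, and thresholds are not specified here, so this assumes a hypothetical detector that scores each completed reasoning step by its mean token entropy and fires retrieval when that score crosses a threshold; the function names (`step_uncertainty`, `should_retrieve`) and the threshold value are illustrative, not the authors' API.

```python
import math

def step_uncertainty(token_probs):
    """Mean per-token entropy (in nats) over one reasoning step.

    token_probs: list of dicts, each mapping candidate tokens to
    probabilities for one generated token position in the step.
    Higher mean entropy suggests the model is less certain and may
    be missing external knowledge.
    """
    entropies = [
        -sum(p * math.log(p) for p in dist.values() if p > 0)
        for dist in token_probs
    ]
    return sum(entropies) / len(entropies)

def should_retrieve(token_probs, threshold=1.0, budget_left=True):
    """Fire a retrieval call only when the step looks uncertain and the
    retrieval budget is not yet exhausted (a stand-in for the learned
    intervention policy described in the abstract)."""
    return budget_left and step_uncertainty(token_probs) > threshold
```

A confident step (probability mass concentrated on one token) stays below the threshold and reasoning continues uninterrupted; a near-uniform step exceeds it and triggers evidence injection, which is how such a policy can cut retrieval calls relative to fixed-interval schemes like IRCoT that retrieve at every step.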