Tool-Integrated Reasoning has emerged as a key paradigm for augmenting Large Language Models (LLMs) with computational capabilities, yet integrating tool-use into long Chain-of-Thought (long CoT) reasoning remains underexplored, largely due to the scarcity of training data and the challenge of integrating tool-use without compromising the model's intrinsic long-chain reasoning. In this paper, we introduce DART (Discovery And Reinforcement of Tool-Integrated Reasoning Chains via Rollout Trees), a reinforcement learning framework that enables spontaneous tool-use during long CoT reasoning without human annotation. DART constructs dynamic rollout trees during training to discover valid tool-use opportunities, branching out at promising positions to explore diverse tool-integrated trajectories. Subsequently, tree-based process advantage estimation identifies and credits the specific sub-trajectories where tool invocation positively contributes to the solution, effectively reinforcing these beneficial behaviors. Extensive experiments on challenging benchmarks such as AIME and GPQA-Diamond demonstrate that DART significantly outperforms existing methods, successfully harmonizing tool execution with long CoT reasoning.
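The tree-based process advantage estimation described above can be illustrated with a minimal sketch. This is an assumed, simplified formulation (not the paper's actual implementation): each branch point in the rollout tree compares the mean terminal reward of every child's subtree against the sibling baseline, so that a tool-use continuation that leads to correct solutions receives positive credit.

```python
# Illustrative sketch of tree-based process advantage estimation over a
# rollout tree. The Node structure and the sibling-baseline advantage are
# assumptions for exposition, not the paper's exact algorithm.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One branch point (or leaf) in a rollout tree."""
    reward: float = 0.0          # terminal reward (meaningful at leaves)
    uses_tool: bool = False      # whether this branch invoked a tool
    children: List["Node"] = field(default_factory=list)


def mean_return(node: Node) -> float:
    """Average terminal reward over all leaves below `node`."""
    if not node.children:
        return node.reward
    return sum(mean_return(c) for c in node.children) / len(node.children)


def process_advantages(node: Node) -> List[float]:
    """Advantage of each child = its subtree's mean return minus the
    sibling baseline (the mean return of the whole branch point)."""
    baseline = mean_return(node)
    return [mean_return(c) - baseline for c in node.children]


# Toy branch point: the tool-use continuation solves the task (reward 1.0),
# the plain continuation fails (reward 0.0).
root = Node(children=[
    Node(uses_tool=True, reward=1.0),
    Node(uses_tool=False, reward=0.0),
])
print(process_advantages(root))  # [0.5, -0.5]
```

Here the tool-use branch receives a positive advantage and the non-tool branch a negative one, so a policy-gradient update would reinforce invoking the tool at this position.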