Large language models (LLMs) have demonstrated strong reasoning capabilities through step-by-step chain-of-thought (CoT) reasoning. Nevertheless, at the limits of model capability, CoT often proves insufficient, and its strictly sequential nature constrains test-time scalability. A potential alternative is divide-and-conquer (DAC) reasoning, which decomposes a complex problem into subproblems to enable more effective exploration of the solution space. Although promising, our analysis reveals a fundamental misalignment between general-purpose post-training and DAC-style inference, which limits the model's capacity to fully leverage this potential. To bridge this gap and fully unlock LLMs' reasoning capabilities on the most challenging tasks, we propose an end-to-end reinforcement learning (RL) framework that strengthens their DAC-style reasoning capacity. At each step, the policy decomposes a problem into a group of subproblems, solves them sequentially, and then addresses the original problem conditioned on the subproblem solutions, with both decomposition and solution integrated into RL training. Under comparable training, our DAC-style framework endows the model with a higher performance ceiling and stronger test-time scalability, surpassing CoT by 8.6% in Pass@1 and 6.3% in Pass@32 on competition-level benchmarks.
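The decompose, solve, and recombine loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual interface: `decompose` and `solve` are hypothetical stand-ins for calls to the trained policy, stubbed here with placeholder strings so the control flow is runnable.

```python
# Hedged sketch of one DAC-style inference step, assuming the policy exposes
# two operations: proposing subproblems and solving a problem given context.
# Both function names and bodies are illustrative stubs, not the real model.

def decompose(problem: str) -> list[str]:
    """Policy proposes a group of subproblems (stubbed for illustration)."""
    return [f"{problem}::sub{i}" for i in range(2)]

def solve(problem: str, context: list[str]) -> str:
    """Policy solves a problem conditioned on prior solutions (stubbed)."""
    return f"solution({problem}; ctx={len(context)})"

def dac_step(problem: str) -> str:
    """One DAC step: decompose, solve subproblems sequentially,
    then answer the original problem conditioned on their solutions."""
    subproblems = decompose(problem)
    sub_solutions: list[str] = []
    for sub in subproblems:  # sequential solving, as described in the abstract
        sub_solutions.append(solve(sub, sub_solutions))
    # Final answer is conditioned on all subproblem solutions.
    return solve(problem, sub_solutions)

print(dac_step("P"))  # → solution(P; ctx=2)
```

In the proposed framework, both the decomposition and the solving calls would be produced by the same RL-trained policy, so the reward signal shapes how the problem is split as well as how each piece is solved.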