Reinforcement Learning with Verifiable Rewards (RLVR) has established itself as the dominant paradigm for instilling rigorous reasoning capabilities in Large Language Models. Although RLVR effectively amplifies already-dominant behaviors, we identify a critical pathology in this alignment process: the systematic suppression of valid but rare reasoning paths, i.e., those with low likelihood under the base model distribution. We theoretically characterize this phenomenon as a "Normalization Squeeze," in which the interplay between mode-seeking policy gradients and finite sampling acts as a high-pass likelihood filter, driving the probability of rare correct traces to statistical extinction. To counteract this collapse without discarding the base model's latent diversity, we propose Amortized Reasoning Tree Search (ARTS). Unlike standard approaches, which force internalization through parameter updates, ARTS shifts the work to inference-time deliberation by decoupling generation from verification. We introduce a Flow Matching objective that repurposes the verifier to estimate the conservation of probability flow, enabling robust navigation of the sparse, high-entropy search spaces where traditional discriminative objectives fail. Extensive experiments on the MATH-500 benchmark show that ARTS reaches 74.6% accuracy under best-of-N selection (BoN@16), effectively matching fully fine-tuned policies (74.7%) without modifying the generative backbone. Crucially, on the long-tail subset where coupled RL optimization collapses to 0% pass@k, ARTS alone recovers substantial accuracy, suggesting that disentangling verification from generation offers a more robust pathway for solving complex reasoning tasks.
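To make the high-pass-filter claim concrete, consider a minimal back-of-envelope sketch (our illustration; the paper's formal characterization may differ). Let $p_t$ denote the policy's probability of a fixed rare correct trace $\tau$ at update $t$, and let $k$ be the number of rollouts drawn per prompt. The trace is ever observed with probability
\[
\Pr[\tau \in \text{batch}] \;=\; 1 - (1 - p_t)^k \;\approx\; k\,p_t \qquad (p_t \ll 1/k).
\]
If $\tau$ goes unobserved, it receives no positive gradient, while normalization reallocates probability mass to the reinforced high-likelihood modes, so $p_{t+1} < p_t$ and hence $\Pr[\tau \text{ observed at } t{+}1] < \Pr[\tau \text{ observed at } t]$: a self-reinforcing squeeze that drives $p_t$ toward statistical extinction.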
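The abstract does not spell out the Flow Matching objective; one standard formulation consistent with "estimating the conservation of probability flow" is the GFlowNet flow-matching loss (Bengio et al., 2021), sketched here under that assumption, with the verifier supplying the terminal reward $R$:
\[
\mathcal{L}_{\mathrm{FM}}(s) \;=\; \left( \log \frac{\sum_{(s',a)\,:\,T(s',a)=s} F_\theta(s',a)}{R(s) \;+\; \sum_{a' \in \mathcal{A}(s)} F_\theta(s,a')} \right)^{2},
\]
where $F_\theta(s,a)$ is a learned edge flow over the reasoning tree, $T$ is the transition function, $\mathcal{A}(s)$ is the set of actions available at state $s$, and $R(s)$ is the verifier reward, nonzero only at terminal states. When inflow matches outflow at every state, sampling actions in proportion to $F_\theta$ draws complete traces with probability proportional to their reward rather than their base-model likelihood, which is precisely the property needed to reach the long-tail traces that coupled RL optimization extinguishes.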