Achieving effective test-time scaling requires models to engage in In-Context Exploration -- the intrinsic ability to generate, verify, and refine multiple reasoning hypotheses within a single continuous context. Grounded in State Coverage theory, our analysis identifies a critical bottleneck to enabling this capability: while broader state coverage requires longer reasoning trajectories, the probability of sampling such sequences decays exponentially during autoregressive generation, a phenomenon we term the ``Shallow Exploration Trap''. To bridge this gap, we propose Length-Incentivized Exploration (\method), a simple yet effective recipe that explicitly encourages models to explore more via a length-based reward coupled with a redundancy penalty, thereby maximizing state coverage in a two-step manner. Comprehensive experiments across different model families (Qwen3, Llama) demonstrate that \method effectively incentivizes in-context exploration, yielding an average improvement of 4.4\% on in-domain tasks and a 2.7\% gain on out-of-domain benchmarks.
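As an illustrative sketch only (not the exact objective above; the symbols $\alpha$, $\beta$, $L_{\max}$, $R_{\text{task}}$, and $\mathrm{Red}$ are notation we introduce here for exposition), such a reward could take the form
\[
R(y) \;=\; R_{\text{task}}(y) \;+\; \alpha \cdot \min\bigl(|y|,\, L_{\max}\bigr) \;-\; \beta \cdot \mathrm{Red}(y),
\]
where $|y|$ denotes the length of the sampled trajectory $y$, the capped length bonus counteracts the exponential decay in the probability of long sequences under autoregressive sampling, and $\mathrm{Red}(y)$ (e.g., an $n$-gram repetition rate) discourages inflating trajectories with redundant text rather than genuine exploration.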