Effective exploration is a key challenge in reinforcement learning for large language models: discovering high-quality trajectories in the vast natural-language sequence space within a limited sampling budget. Existing methods face notable limitations. GRPO samples exclusively from the root, saturating high-probability trajectories while leaving deep, error-prone states under-explored. Tree-based methods blindly disperse the budget across trivial or unrecoverable states, causing sampling dilution that fails to uncover rare correct suffixes and destabilizes local baselines. To address this, we propose Deep Dense Exploration (DDE), a strategy that focuses exploration on $\textit{pivots}$: deep, recoverable states within unsuccessful trajectories. We instantiate DDE with DEEP-GRPO, which introduces three key innovations: (1) a lightweight, data-driven utility function that automatically balances recoverability against a depth bias to identify pivot states; (2) local dense resampling at each pivot, which increases the probability of discovering correct subsequent trajectories; and (3) a dual-stream optimization objective that decouples global policy learning from local corrective updates. Experiments on mathematical reasoning benchmarks demonstrate that our method consistently outperforms GRPO, tree-based methods, and other strong baselines.
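The pivot-selection idea in (1) can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual formulation: the multiplicative utility form, the `beta` depth-bias exponent, and the Monte Carlo estimate of recoverability (fraction of successful rollouts from a state) are all assumptions made for the sketch.

```python
def pivot_utility(recoverability, depth, traj_len, beta=1.0):
    """Hypothetical utility balancing recoverability and depth bias.

    recoverability: estimated probability (e.g., via rollouts) that the
        policy can still reach a correct answer from this state.
    depth / traj_len: normalized position of the state in the failed
        trajectory; deeper states get a larger depth bias.
    beta: assumed exponent controlling how strongly depth is favored.
    """
    depth_bias = (depth / traj_len) ** beta
    return recoverability * depth_bias


def select_pivot(states, beta=1.0):
    """Pick the pivot index from a failed trajectory.

    states: list of (index, estimated_recoverability) pairs, one per
        prefix state of the unsuccessful trajectory.
    Returns the index of the state with maximal utility.
    """
    traj_len = len(states)
    best = max(
        states,
        key=lambda s: pivot_utility(s[1], s[0] + 1, traj_len, beta),
    )
    return best[0]
```

Under this sketch, local dense resampling would then spend the remaining budget generating many continuations from `select_pivot(...)` rather than from the root, concentrating samples where a correct suffix is both plausible and hard to find.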