Reinforcement Learning (RL) is crucial for unlocking the complex reasoning capabilities of Diffusion-based Large Language Models (dLLMs). However, applying RL to dLLMs faces unique challenges in efficiency and stability. To address these challenges, we propose Spatio-Temporal Pruning (STP), a framework designed to simultaneously improve the efficiency and stability of RL for dLLMs. STP compresses the redundancy in the generative process through: (1) \textit{spatial pruning}, which constrains the exploration space using static priors; and (2) \textit{temporal pruning}, which bypasses redundant late-stage refinement steps. Our theoretical analysis demonstrates that STP strictly reduces the variance of the log-likelihood estimation, thereby ensuring more stable policy updates. Extensive experiments demonstrate that STP surpasses state-of-the-art baselines in both efficiency and accuracy. Our code is available at https://github.com/Lolo1222/STP.