Unsupervised reinforcement learning with verifiable rewards (URLVR) offers a pathway to scale LLM training beyond the supervision bottleneck by deriving rewards without ground-truth labels. Recent works leverage model-intrinsic signals and show promising early gains, yet their potential and limitations remain unclear. In this work, we revisit URLVR and provide a comprehensive analysis spanning taxonomy, theory, and extensive experiments. We first classify URLVR methods as intrinsic or external based on their reward source, then establish a unified theoretical framework revealing that all intrinsic methods converge toward sharpening the model's initial distribution. This sharpening mechanism succeeds when initial confidence aligns with correctness but fails catastrophically when the two are misaligned. Through systematic experiments, we show that intrinsic rewards consistently follow a rise-then-fall pattern across methods, with the timing of collapse determined by the model's prior rather than by engineering choices. Despite these scaling limits, we find that intrinsic rewards remain valuable for test-time training on small datasets, and we propose the Model Collapse Step as a measure of the model's prior that serves as a practical indicator of RL trainability. Finally, we explore external reward methods that ground verification in computational asymmetries, showing preliminary evidence that they may escape the confidence-correctness ceiling. Our findings chart the boundaries of intrinsic URLVR while motivating paths toward scalable alternatives.
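To make the sharpening mechanism concrete, below is a minimal sketch (not the paper's implementation) of one representative intrinsic reward: a label-free self-consistency score in which each sampled answer is rewarded by how often it agrees with the other samples. Optimizing such a reward pushes probability mass toward whatever answer the initial policy already favors, which is exactly the distribution-sharpening behavior described above. The helper names `sample_answers` and `extract_answer` are hypothetical placeholders.

```python
from collections import Counter
from typing import Callable, List


def self_consistency_rewards(
    prompt: str,
    sample_answers: Callable[[str, int], List[str]],  # hypothetical sampler: (prompt, n) -> n completions
    extract_answer: Callable[[str], str],             # hypothetical parser: completion -> final answer string
    n_samples: int = 8,
) -> List[float]:
    """Reward each sampled completion by the empirical frequency of its answer.

    No ground-truth label is used: the majority answer receives the highest
    reward, so an RL update on this signal concentrates probability on the
    model's already-dominant answer. This helps when initial confidence tracks
    correctness and fails when it does not, matching the rise-then-fall
    behavior discussed in the abstract.
    """
    completions = sample_answers(prompt, n_samples)
    answers = [extract_answer(c) for c in completions]
    counts = Counter(answers)
    return [counts[a] / n_samples for a in answers]
```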