Learning in the combinatorially large output space of sequence generation problems is challenging: providing expert demonstrations scales poorly with sequence length, and RL struggles with sparse rewards. Between dense demonstrations in supervised training and no demonstrations in reinforcement learning lies an underexplored regime: partial supervision. We ask whether some classes of sequence learning problems become efficiently learnable by exploiting this gap. We address this by introducing adaptive backtracking (AdaBack), a per-sample curriculum learning algorithm that reveals a partial prefix of the target output. The supervision length is adjusted dynamically for each sample based on the model's past reward signal, allowing it to incrementally learn to complete reasoning chains by conditioning on correct partial solutions. We investigate this intermediate regime between SFT and RL and argue that per-sample curriculum learning is more than a trade-off between efficiency and generality: it can succeed on tasks with long sequences of latent dependencies where both SFT and RL fail to generalize. Using a synthetic task with latent parity constraints, we show that AdaBack reliably solves problems that are otherwise intractable. On three mathematical reasoning benchmarks (DeepScaleR, MATH, and GSM8k), we find that AdaBack enables models to solve problems that RL alone cannot, acquiring new reasoning capabilities through incremental exposure to partial solutions.
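The per-sample curriculum described above can be sketched in a few lines. This is a minimal illustration under assumed details, not the paper's implementation: the names (`adaback_step`, `attempt_fn`) and the specific update rule (shrink the revealed prefix by one token on success, grow it on failure) are hypothetical stand-ins for AdaBack's actual reward-based schedule.

```python
# Hypothetical sketch of a per-sample adaptive-backtracking curriculum.
# Assumption: supervision length moves by one token per episode; the
# paper's actual AdaBack update rule may differ.

def adaback_step(prefix_len, target, attempt_fn, step=1):
    """One curriculum update for a single training sample.

    prefix_len : number of target tokens currently revealed as supervision.
    target     : the full expert target sequence (list of tokens).
    attempt_fn : callable(prefix) -> bool; True if the model, conditioned
                 on the revealed prefix, completes the sequence correctly.
    Returns the updated prefix length and the success flag.
    """
    prefix = target[:prefix_len]
    success = attempt_fn(prefix)
    if success:
        # Model solved it from this prefix: reveal less next time,
        # moving toward fully unsupervised (RL-like) generation.
        prefix_len = max(0, prefix_len - step)
    else:
        # Model failed: back off toward fuller supervision (SFT-like).
        prefix_len = min(len(target), prefix_len + step)
    return prefix_len, success
```

With a toy model that succeeds whenever at least half the target is revealed, repeated calls decay the supervision length from full down to the model's current frontier of competence and then hover there, which is the intended per-sample behavior.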