Scaled post-training now drives many of the largest capability gains in language models (LMs), yet its effect on pretrained knowledge remains poorly understood. Not all forgetting is equal: forgetting one fact (e.g., a U.S. president or an API call) does not "average out" by recalling another. Hence, we propose a sample-wise paradigm that measures what is forgotten and when backward transfer occurs. Our metric counts 1->0 transitions (correct before post-training, incorrect after) to quantify forgetting and 0->1 transitions to quantify backward transfer. Traditional task averages conflate these effects and obscure large changes. For multiple-choice benchmarks, we add chance-adjusted variants that subtract the expected contribution of random guessing from pre- and post-training accuracies. We apply this framework across post-training stages, model sizes, and data scales. Our large-scale analysis shows that: (1) domain-continual pretraining induces moderate forgetting with low-to-moderate backward transfer; (2) RL/SFT post-training applied to base models, as well as instruction tuning, yields moderate-to-large backward transfer on math and logic with overall low-to-moderate forgetting; (3) applying RL/SFT to instruction-tuned models is sensitive to data scale: at small scales, both forgetting and backward transfer are small; at larger scales, effects are mixed and warrant further study with better controls; (4) model merging does not reliably mitigate forgetting. Overall, our framework offers a practical yardstick for mapping how post-training alters pretrained knowledge at scale, enabling progress towards generally capable AI systems.
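To make the sample-wise metric concrete, the following is a minimal sketch (not the authors' implementation; function names, the rescaling-free chance adjustment, and the example numbers are assumptions for illustration) of counting 1->0 and 0->1 transitions from per-sample correctness before and after post-training.

```python
# Minimal sketch of the sample-wise forgetting / backward-transfer counts.
# Assumptions: per-sample correctness is available as 0/1 labels, and the
# chance adjustment simply subtracts the random-guessing baseline 1/k;
# the paper's exact formulation may differ.
from typing import Sequence, Tuple


def transition_rates(before: Sequence[int], after: Sequence[int]) -> Tuple[float, float]:
    """Return (forgetting, backward_transfer) as fractions of samples.

    before[i] / after[i] are 1 if sample i is answered correctly
    pre-/post-training, else 0.
    """
    assert len(before) == len(after) and len(before) > 0
    n = len(before)
    forgetting = sum(b == 1 and a == 0 for b, a in zip(before, after)) / n  # 1 -> 0 transitions
    backward = sum(b == 0 and a == 1 for b, a in zip(before, after)) / n    # 0 -> 1 transitions
    return forgetting, backward


def chance_adjusted_accuracy(acc: float, num_choices: int) -> float:
    """Subtract the expected contribution of random guessing (1/k) on a
    k-way multiple-choice benchmark."""
    return acc - 1.0 / num_choices


# Hypothetical usage:
before = [1, 1, 0, 0, 1, 0]
after = [1, 0, 1, 0, 1, 1]
f, b = transition_rates(before, after)
print(f"forgetting={f:.2f}, backward_transfer={b:.2f}")
print(f"chance-adjusted accuracy: {chance_adjusted_accuracy(0.62, 4):.2f}")
```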