Direct Preference Optimization (DPO) has proven effective at improving the performance of large language models (LLMs) on downstream tasks such as reasoning and alignment. In this work, we propose Step-Controlled DPO (SCDPO), a method for automatically providing stepwise error supervision by creating negative samples of mathematical reasoning rationales that begin making errors at a specified step. Applying these samples in DPO training enables SCDPO to better align the model to understand reasoning errors and output accurate reasoning steps. We apply SCDPO to both code-integrated and chain-of-thought solutions, empirically showing that it consistently improves performance compared to naive DPO on three different SFT models: one existing SFT model and two models we finetuned ourselves. Qualitative analysis of the credit assignment of SCDPO and DPO demonstrates the effectiveness of SCDPO at identifying errors in mathematical solutions. We then apply SCDPO to an InternLM2-20B model, yielding a 20B model that achieves high scores of 88.5% on GSM8K and 58.1% on MATH, rivaling all other open-source LLMs and showing the great potential of our method.
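To make the preference objective concrete, the sketch below shows the standard per-pair DPO loss that SCDPO would apply to its constructed pairs: a correct rationale (chosen) and a negative rationale that, per the abstract, starts erring at a specified step. The numeric log-probabilities are purely illustrative, and the function is a minimal single-pair sketch rather than the paper's actual training code.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    logp_w / logp_l: policy log-probs of the chosen / rejected rationale.
    ref_logp_w / ref_logp_l: reference-model log-probs of the same rationales.
    In SCDPO, the rejected rationale shares its first k steps with a correct
    solution and begins making errors at step k+1 (hypothetical framing of the
    abstract's description; beta value is an assumption).
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative numbers only: the policy should prefer the chosen rationale.
loss_aligned = dpo_loss(-10.0, -14.0, -11.0, -12.0)   # chosen up, rejected down
loss_inverted = dpo_loss(-12.0, -10.0, -11.0, -12.0)  # preferences reversed
```

As expected, the loss is smaller when the policy raises the likelihood of the chosen rationale relative to the reference while lowering that of the step-corrupted negative.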