Current approaches for strengthening LLM reasoning tend to introduce a training bias toward human-like reasoning trajectories. In step-wise preference optimization, in particular, dependence on human or higher-capacity model annotations for intermediate steps limits exploration of alternative, non-human-like reasoning paths and thus constrains achievable performance. Furthermore, through a small-scale pilot study, we observed that in approximately 75% of cases, the model's first erroneous step occurs after the lowest-confidence point. This suggests that guiding the model at its lowest-confidence point, before an error occurs, provides more accurate supervision than locating the first explicit error. In this paper, we propose Confidence-Guided Reasoning Path Preference Optimization (CGPO), a method that leverages a confidence signal to identify the point of maximal uncertainty in the model's reasoning process and applies self-generated, non-human-like reasoning-path guidance to mitigate trajectory drift. Our experiments span diverse models applied to both code and mathematical reasoning tasks. The results show that, with the same amount of training data, our method using data generated by a small model can, in most cases, outperform approaches that use data generated by a strong model or annotated by humans.
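To make the core signal concrete, the following is a minimal sketch of locating the lowest-confidence step in a reasoning trajectory, assuming step-level confidence is scored as the geometric-mean token probability under the policy model; the function name and scoring choice are illustrative, not the paper's exact implementation.

```python
import math

def lowest_confidence_step(step_token_logprobs):
    """Locate the reasoning step where the model is least confident.

    step_token_logprobs: list of lists; entry i holds the token
    log-probabilities the model assigned while generating step i.
    Returns the index of the step with the lowest mean token probability.
    (Illustrative scoring; CGPO's actual confidence signal may differ.)
    """
    confidences = [
        math.exp(sum(lps) / len(lps))  # geometric-mean token probability
        for lps in step_token_logprobs
    ]
    return min(range(len(confidences)), key=confidences.__getitem__)

# Example: step 1 has the weakest average token probability, so guidance
# would be injected there, before any explicit error appears downstream.
steps = [[-0.1, -0.2], [-1.5, -2.0, -1.8], [-0.3]]
print(lowest_confidence_step(steps))  # -> 1
```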