Self-alignment, whereby models learn to improve themselves without human annotation, is a rapidly growing research area. However, existing techniques often fail to improve complex reasoning tasks due to the difficulty of assigning correct rewards. An orthogonal approach that is known to improve correctness is self-consistency, an inference-time method that samples multiple solutions and selects the most consistent answer. In this work, we extend the self-consistency concept to help train models. We thus introduce self-consistency preference optimization (ScPO), which iteratively trains models to prefer consistent answers over inconsistent ones on new, unsupervised problems. We show that ScPO leads to large improvements over conventional reward model training on reasoning tasks such as GSM8K and MATH, closing the gap with supervised training on gold answers or preferences, and that combining ScPO with standard supervised learning improves results even further. On ZebraLogic, ScPO finetunes Llama-3 8B to be superior to Llama-3 70B, Gemma-2 27B, and Claude-3 Haiku.
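To make the training signal concrete, below is a minimal Python sketch of how consistency votes over sampled solutions could be turned into preference pairs. The `model_sample` callable, the tie handling, and the vote-margin weight are illustrative assumptions rather than the paper's exact recipe, which the abstract does not specify.

```python
from collections import Counter

def build_scpo_pair(model_sample, problem, num_samples=16):
    """Build one ScPO-style preference pair for an unlabeled problem.

    `model_sample(problem)` is a hypothetical callable that returns one
    (reasoning_trace, final_answer) tuple per call.
    """
    samples = [model_sample(problem) for _ in range(num_samples)]
    votes = Counter(answer for _, answer in samples)

    # Self-consistency: the answer reached by the most samples is "chosen",
    # the answer reached by the fewest samples is "rejected".
    ranked = votes.most_common()
    chosen_answer, chosen_votes = ranked[0]
    rejected_answer, rejected_votes = ranked[-1]
    if chosen_answer == rejected_answer:
        return None  # every sample agreed, so there is no preference signal

    chosen_trace = next(t for t, a in samples if a == chosen_answer)
    rejected_trace = next(t for t, a in samples if a == rejected_answer)

    # Assumed confidence weight: a larger vote margin suggests a more
    # reliable pair, so it could scale the preference-optimization loss.
    weight = (chosen_votes - rejected_votes) / num_samples
    return {"prompt": problem, "chosen": chosen_trace,
            "rejected": rejected_trace, "weight": weight}
```

In an iterative setup of the kind the abstract describes, pairs built this way would be fed to a preference-optimization step, and the updated model would then generate samples for the next round.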