Alignment, endowing a pre-trained large language model (LLM) with the ability to follow instructions, is crucial for its real-world applications. Conventional supervised fine-tuning (SFT) methods formalize it as causal language modeling, typically with a cross-entropy objective, requiring a large amount of high-quality instruction-response pairs. However, the quality of widely used SFT datasets cannot be guaranteed in practice, due to the high cost and intensive labor required for their creation and maintenance. To overcome the limitations associated with the quality of SFT datasets, we introduce a novel \textbf{p}reference-\textbf{o}riented supervised \textbf{f}ine-\textbf{t}uning approach, namely PoFT. The intuition is to boost SFT by imposing a particular preference: \textit{favoring the target model over aligned LLMs on the same SFT data.} This preference encourages the target model to predict a higher likelihood than that predicted by the aligned LLMs, incorporating assessment information on data quality (i.e., the likelihood predicted by the aligned LLMs) into the training process. Extensive experiments are conducted, and the results validate the effectiveness of the proposed method. PoFT achieves stable and consistent improvements over the SFT baselines across different training datasets and base models. Moreover, we show that PoFT can be integrated with existing SFT data filtering methods to achieve better performance, and further improved by subsequent preference optimization procedures, such as DPO.
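To make the stated preference concrete, the sketch below illustrates one plausible way such an objective could be instantiated: a Bradley-Terry-style loss that rewards the target model for assigning a higher sequence log-likelihood than the (frozen) aligned reference LLMs. The function name, the averaging over reference models, and the scaling parameter `beta` are all illustrative assumptions, not the paper's exact formulation.

```python
import math

def poft_style_loss(logp_target, logp_refs, beta=1.0):
    """Hypothetical sketch of a preference-oriented SFT objective.

    logp_target: sequence log-likelihood of the response under the
                 target (trainable) model.
    logp_refs:   log-likelihoods of the same response under one or
                 more frozen aligned reference LLMs.
    beta:        assumed scaling factor on the preference margin.
    """
    # Aggregate the fixed reference log-likelihoods; here a simple
    # average over the aligned LLMs (an assumption for illustration).
    logp_ref = sum(logp_refs) / len(logp_refs)

    # Bradley-Terry preference: -log sigmoid(beta * margin).
    # The loss shrinks as the target model's likelihood exceeds the
    # references', so high-likelihood (high-quality) data is weighted
    # by the references' own assessment.
    margin = beta * (logp_target - logp_ref)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A target model that already beats the references on a sample (positive margin) incurs a small loss, while falling below them (negative margin) incurs a large one, which matches the abstract's description of folding the aligned LLMs' likelihood assessment into training.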