Mainstream approaches to aligning large language models (LLMs) heavily rely on human preference data, particularly when models require periodic updates. The standard process for iterative alignment of LLMs involves collecting new human feedback for each update. However, the data collection process is costly and challenging to scale. To address this issue, we introduce the "TS-Align" framework, which fine-tunes a policy model using pairwise feedback data automatically mined from its outputs. This automatic mining process is efficiently accomplished through the collaboration between a large-scale teacher model and a small-scale student model. The policy fine-tuning process can be iteratively repeated using on-policy generations within our proposed teacher-student collaborative framework. Through extensive experiments, we demonstrate that our final aligned policy outperforms the base policy model with an average win rate of 69.7% across seven conversational or instruction-following datasets. Furthermore, we show that the ranking capability of the teacher is effectively distilled into the student through our pipeline, resulting in a small-scale yet effective reward model for policy model alignment.
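The abstract above only sketches the pipeline at a high level. Purely as an illustration, the snippet below shows one plausible shape of such a teacher-student mining loop in Python; all function names, the student-prefilter/teacher-rerank split, the sample counts, and the DPO-style update are assumptions for illustration, not details confirmed by the abstract.

```python
# Hypothetical sketch of a TS-Align-style iteration; names and the exact
# division of labor between teacher and student are illustrative assumptions.
from typing import Callable, List, Tuple

def mine_pairwise_feedback(
    prompts: List[str],
    policy_generate: Callable[[str, int], List[str]],      # samples k responses per prompt
    student_score: Callable[[str, str], float],            # small, cheap reward model
    teacher_rerank: Callable[[str, List[str]], List[str]], # large model refines a shortlist
    num_samples: int = 8,
    shortlist: int = 2,
) -> List[Tuple[str, str, str]]:
    """Return (prompt, chosen, rejected) triples mined from on-policy generations."""
    pairs = []
    for prompt in prompts:
        candidates = policy_generate(prompt, num_samples)
        # The student pre-ranks all candidates; the teacher only re-checks a short
        # list, keeping the large model's cost per prompt low (an assumed design).
        ranked = sorted(candidates, key=lambda r: student_score(prompt, r), reverse=True)
        top = teacher_rerank(prompt, ranked[:shortlist])
        chosen, rejected = top[0], ranked[-1]
        pairs.append((prompt, chosen, rejected))
    return pairs

def ts_align_iteration(policy, prompts, student_score, teacher_rerank, preference_update):
    """One alignment round: mine preference pairs from the current policy, then fine-tune on them."""
    pairs = mine_pairwise_feedback(prompts, policy.generate, student_score, teacher_rerank)
    # preference_update could be a DPO-style or similar pairwise preference-tuning step.
    return preference_update(policy, pairs)
```

Repeating `ts_align_iteration` on fresh on-policy generations would correspond to the iterative repetition described in the abstract.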