Recent self-rewarding large language models (LLMs) have successfully applied LLM-as-a-Judge to iteratively improve alignment performance without the need for human-annotated preference data. These methods commonly use the same LLM as both the policy model (which generates responses) and the reward model (which scores and ranks those responses). The ranked responses are then used as preference pairs to train the LLM via direct alignment techniques (e.g., DPO). However, throughout this process there is no guarantee that the rewarding and ranking are accurate, and this accuracy is critical for obtaining reliable rewards and high-quality preference data. Empirical results from relatively small LLMs (e.g., 7B parameters) also indicate that the improvements from self-rewarding may diminish after several iterations in certain situations, which we hypothesize is due to accumulated bias in the reward system. This bias can lead to unreliable preference data for training the LLM. To address this issue, we first formulate and analyze a generalized iterative preference fine-tuning framework for self-rewarding language models. We then introduce regularization into this generalized framework to mitigate overconfident preference labeling in the self-rewarding process. Based on this theoretical insight, we propose a Consistency Regularized sElf-rewarding lAnguage Model (CREAM) that leverages rewarding consistency across different iterations to regularize self-rewarding training, helping the model learn from more reliable preference data. With this explicit regularization, our empirical results demonstrate the superiority of CREAM in improving both reward consistency and alignment performance. The code is publicly available at https://github.com/Raibows/CREAM.
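To make the consistency idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: rewards assigned to the same candidate responses by the current-iteration and previous-iteration models are compared via rank correlation, and the agreement is mapped to a confidence weight that could down-weight (e.g., label-smooth) preference pairs whose rankings are inconsistent across iterations. All function and variable names here (`kendall_tau`, `consistency_weight`, `rewards_now`, `rewards_prev`) are hypothetical.

```python
# Illustrative sketch of cross-iteration reward-consistency weighting.
# Assumption: we have scalar rewards for the same N responses from two
# model iterations; names and the [0, 1] mapping are our own choices.
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall rank correlation between two reward vectors over the
    same set of responses; ties contribute to neither count."""
    assert len(scores_a) == len(scores_b) >= 2
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        sign = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0

def consistency_weight(rewards_now, rewards_prev):
    """Map rank agreement in [-1, 1] to a label-confidence weight
    in [0, 1]; 1 means the two iterations rank responses identically."""
    return (kendall_tau(rewards_now, rewards_prev) + 1) / 2

# Example: rewards for 4 sampled responses from two iterations.
rewards_now = [0.9, 0.4, 0.7, 0.1]
rewards_prev = [0.8, 0.6, 0.5, 0.2]
w = consistency_weight(rewards_now, rewards_prev)
```

A weight like `w` could then soften the binary chosen/rejected labels fed to DPO, so that pairs ranked inconsistently across iterations contribute less to the gradient; the actual CREAM regularization is derived formally in the paper.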