The alignment of language models~(LMs) with human preferences is critical for building reliable AI systems. The problem is typically framed as optimizing an LM policy to maximize an expected reward that reflects human preferences. Recently, Direct Preference Optimization~(DPO) was proposed as an LM alignment method that directly optimizes the policy from static preference data, and was further improved by incorporating on-policy sampling~(i.e., preference candidates generated during the training loop) for better LM alignment. However, we show that on-policy data is not always optimal: systematic differences in effectiveness emerge between static and on-policy preference candidates. For example, on-policy data yields $3\times$ the effectiveness of static data for Llama-3, but only $0.4\times$ for Zephyr. To explain this phenomenon, we propose the alignment stage assumption, which divides the alignment process into two distinct stages: the preference injection stage, which benefits from diverse data, and the preference fine-tuning stage, which favors high-quality data. Through theoretical and empirical analysis, we characterize these stages and propose an effective algorithm to identify the boundary between them. We perform experiments on $5$ models~(Llama, Zephyr, Phi-2, Qwen, Pythia) and $2$ alignment methods~(DPO, SLiC-HF) to show the generalizability of the alignment stage assumption and the effectiveness of the boundary measurement algorithm.
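For reference, the DPO objective named above has a standard form (this is a sketch of the commonly used formulation, not necessarily this paper's exact notation): writing $\pi_\theta$ for the policy being trained, $\pi_{\mathrm{ref}}$ for a fixed reference model, and $(x, y_w, y_l) \sim \mathcal{D}$ for a prompt with chosen and rejected responses drawn from the preference data,

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
\left[
  \log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \right)
\right]
```

In the static setting, $\mathcal{D}$ is a fixed preference dataset; in the on-policy variants discussed above, the candidate responses $y_w, y_l$ are instead sampled from the current policy during training and then ranked.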