The alignment of large language models (LLMs) with human preferences remains a key challenge. While post-training techniques like Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) have achieved notable success, they often introduce computational inefficiencies and training instability. In this paper, we propose Feature-level constrained Preference Optimization (FPO), a novel method designed to simplify the alignment process while ensuring stability. FPO leverages pre-trained Sparse Autoencoders (SAEs) and introduces feature-level constraints, allowing for efficient, sparsity-enforced alignment. Our approach gains efficiency by using the sparse features activated in a well-trained sparse autoencoder, and retains the quality of sequential KL-divergence regularization by using a feature-level offline reference. Experimental results on benchmark datasets demonstrate that FPO achieves a 5.08% absolute improvement in win rate over state-of-the-art baselines at much lower computational cost, making it a promising solution for efficient and controllable LLM alignment.
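As a rough illustration only (the precise formulation is given in the body of the paper, not here), the feature-level constraint can be pictured as a DPO-style preference loss augmented with a divergence penalty on SAE feature activations relative to an offline reference; the weight $\lambda$, the feature map $f_{\text{SAE}}(\cdot)$, and the choice of divergence $D$ below are illustrative assumptions rather than the paper's exact notation:
$$
\mathcal{L}_{\text{FPO}}(\theta) \;=\; \mathcal{L}_{\text{pref}}(\theta) \;+\; \lambda\, D\!\left( f_{\text{SAE}}(\pi_\theta) \,\big\|\, f_{\text{SAE}}^{\text{ref}} \right),
$$
where $\mathcal{L}_{\text{pref}}$ is a pairwise preference loss over chosen and rejected responses, $f_{\text{SAE}}(\pi_\theta)$ denotes the sparse feature activations of the policy's hidden states under the pre-trained SAE, $f_{\text{SAE}}^{\text{ref}}$ is the corresponding precomputed (offline) reference activation, and $D$ is a KL-type divergence computed at the feature level, so that no online reference model forward pass is needed during training.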