Recent methods for aligning large language models (LLMs) with human feedback predominantly rely on a single reference model, which limits diversity, encourages overfitting, and underutilizes the wide range of available pre-trained models. Incorporating multiple reference models has the potential to address these limitations by broadening perspectives, reducing bias, and leveraging the strengths of diverse open-source LLMs. However, integrating multiple reference models into reinforcement learning from human feedback (RLHF) frameworks poses significant theoretical challenges, and obtaining exact solutions in this setting has remained an open problem. This paper presents the first \emph{exact solution} to the multiple reference model problem in reverse KL-regularized RLHF. We introduce a comprehensive theoretical framework that includes rigorous statistical analysis and provides sample complexity guarantees. Additionally, we extend our analysis to forward KL-regularized RLHF, offering new insights into sample complexity requirements in multiple reference scenarios. Our contributions lay the foundation for more advanced and adaptable LLM alignment techniques, enabling the effective use of multiple reference models. This work paves the way for developing alignment frameworks that are both theoretically sound and better suited to the challenges of modern AI ecosystems.
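For orientation, the following is a minimal sketch of the familiar \emph{single}-reference reverse KL-regularized RLHF objective and its well-known closed-form maximizer; this is standard background only, not the paper's multi-reference solution, and the symbols $r$, $\beta$, $\rho$, and $\pi_{\mathrm{ref}}$ are generic placeholders rather than notation taken from the paper:
\[
\pi^{\star} \;=\; \arg\max_{\pi}\; \mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot \mid x)}\!\bigl[ r(x,y) \bigr] \;-\; \beta\, \mathrm{KL}\!\bigl( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \bigr),
\qquad
\pi^{\star}(y \mid x) \;\propto\; \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\Bigl( \tfrac{1}{\beta}\, r(x,y) \Bigr),
\]
where $r$ is a reward model, $\beta > 0$ the regularization strength, $\rho$ the prompt distribution, and $\pi_{\mathrm{ref}}$ the single reference policy. The paper's stated contribution is the exact analogue of this solution when the single $\pi_{\mathrm{ref}}$ is replaced by multiple reference models.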