This paper presents a novel approach to aligning large language models (LLMs) with individual human preferences, sometimes referred to as Reinforcement Learning from \textit{Personalized} Human Feedback (RLPHF). Given preferences stated along multiple dimensions, such as helpfulness, conciseness, or humor, the goal is to create, without re-training, an LLM that best adheres to this specification. Starting from specialized expert LLMs, each trained for one such preference dimension, we propose a black-box method that merges their outputs at the token level. We train a lightweight Preference Control Model (PCM) that dynamically translates the preference description and the current context into next-token prediction weights. By combining the expert models' outputs at the token level, our approach dynamically generates text that optimizes for the given preferences. Empirical tests show that our method matches or surpasses existing preference-merging techniques, offering a scalable, efficient alternative to fine-tuning LLMs for individual personalization.
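As a rough illustration of the token-level merging described above, the following minimal sketch (not the paper's implementation) shows how per-expert next-token distributions could be mixed with step-wise weights; the \texttt{experts} objects exposing next-token logits and the \texttt{pcm} object mapping the preference description and context to mixture weights are hypothetical placeholders.

\begin{verbatim}
import torch
import torch.nn.functional as F

def merge_next_token_probs(expert_logits, pcm_weights):
    """Mix per-expert next-token logits into one distribution.

    expert_logits: (num_experts, vocab_size) logits, one row per expert.
    pcm_weights:   (num_experts,) mixture weights from the PCM,
                   assumed non-negative and summing to 1.
    Returns a (vocab_size,) tensor of merged next-token probabilities.
    """
    expert_probs = F.softmax(expert_logits, dim=-1)                   # per-expert distributions
    return (pcm_weights.unsqueeze(-1) * expert_probs).sum(dim=0)      # weighted mixture

def generate(prompt_ids, experts, pcm, preference_text, max_new_tokens=64):
    # Illustrative decoding loop; `experts` and `pcm` are assumed interfaces.
    ids = prompt_ids
    for _ in range(max_new_tokens):
        logits = torch.stack([e.next_token_logits(ids) for e in experts])  # (E, V)
        weights = pcm.weights(preference_text, ids)                        # (E,)
        probs = merge_next_token_probs(logits, weights)
        next_id = torch.multinomial(probs, num_samples=1)                  # sample next token
        ids = torch.cat([ids, next_id])
    return ids
\end{verbatim}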