Although humans inherently hold diverse values, current large language model (LLM) alignment methods often assume that aligning LLMs with the general public's preferences is optimal. A major challenge in adopting a more individualized approach to LLM alignment is its lack of scalability, as it involves repeatedly acquiring preference data and training new reward models and LLMs for each individual's preferences. To address these challenges, we propose a new paradigm in which users specify what they value most within the system message, steering the LLM's generation behavior to better align with the user's intentions. However, a naive application of such an approach is non-trivial, since LLMs are typically trained on a uniform system message (e.g., "You are a helpful assistant"), which limits their ability to generalize to diverse, unseen system messages. To improve this generalization, we create the Multifaceted Collection, a preference dataset with 192k combinations of values beyond generic helpfulness and harmlessness, spanning 65k user instructions. Using this dataset, we train a 7B LLM called Janus and test it on 921 prompts from 5 benchmarks (AlpacaEval 2.0, FLASK, Koala, MT-Bench, and Self-Instruct) by adding various unseen system messages that reflect user preferences. Janus achieves tie+win rates of 75.2%, 72.4%, and 66.4% against Mistral 7B Instruct v0.2, GPT-3.5 Turbo, and GPT-4, respectively. Unexpectedly, on three benchmarks focused on response helpfulness (AlpacaEval 2.0, MT-Bench, Arena Hard Auto v0.1), Janus also outperforms LLaMA 3 8B Instruct by margins of +4.0%, +0.1%, and +3.0%, underscoring that training with a vast array of system messages can also enhance alignment with the general public's preferences. Our code, dataset, benchmark, and models are available at https://github.com/kaistAI/Janus.
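The personalization paradigm described above replaces a fixed system prompt with one that encodes each user's stated values. The sketch below is a minimal illustration of that idea, assuming a standard system/user chat format; the value phrases and the `compose_system_message` helper are illustrative assumptions, not the paper's exact prompt format.

```python
# Sketch: encode user-specified values in the system message instead of a
# uniform "You are a helpful assistant" prompt, so a system-message-aware
# model (such as Janus) can steer its generations toward those values.

def compose_system_message(values):
    """Turn a list of user-stated value preferences into a system message."""
    preamble = "You are a helpful assistant."
    if not values:
        return preamble
    bullet_list = "\n".join(f"- {v}" for v in values)
    return (
        f"{preamble} The user values the following in your responses:\n"
        f"{bullet_list}"
    )

# A personalized chat in the standard role-based format, ready to pass to
# any chat-templated LLM (e.g., via a tokenizer's chat template):
messages = [
    {
        "role": "system",
        "content": compose_system_message(
            ["concise, evidence-backed answers", "a formal tone"]
        ),
    },
    {"role": "user", "content": "Explain why the sky is blue."},
]
```

With no values supplied, the helper falls back to the generic assistant prompt, mirroring the uniform system message that conventionally trained LLMs see.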