Although humans inherently have diverse values, current large language model (LLM) alignment methods often assume that aligning LLMs with the general public's preferences is optimal. A major challenge in adopting a more individualized approach to LLM alignment is its lack of scalability, as it involves repeatedly acquiring preference data and training new reward models and LLMs for each individual's preferences. To address these challenges, we propose a new paradigm in which users specify what they value most within the system message, steering the LLM's generation behavior to better align with the user's intentions. However, a naive application of such an approach is non-trivial, since LLMs are typically trained on a uniform system message (e.g., "You are a helpful assistant"), which limits their ability to generalize to diverse, unseen system messages. To improve this generalization, we create the Multifaceted Collection, a preference dataset with 192k combinations of values beyond generic helpfulness and harmlessness, spanning 65k user instructions. Using this dataset, we train a 7B LLM called Janus and test it on 921 prompts from 5 benchmarks (AlpacaEval 2.0, FLASK, Koala, MT-Bench, and Self-Instruct) by adding various unseen system messages that reflect user preferences. Janus achieves tie+win rates of 75.2%, 72.4%, and 66.4% against Mistral 7B Instruct v0.2, GPT-3.5 Turbo, and GPT-4, respectively. Unexpectedly, on three benchmarks focused on response helpfulness (AlpacaEval 2.0, MT-Bench, Arena Hard Auto v0.1), Janus also outperforms LLaMA 3 8B Instruct by margins of +4.0%, +0.1%, and +3.0%, respectively, underscoring that training with a vast array of system messages can also enhance alignment with the general public's preferences. Our code, dataset, benchmark, and models are available at https://github.com/kaistAI/Janus.
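The core paradigm above — replacing a generic system message with one that encodes an individual user's values — can be sketched in a few lines. This is a minimal illustration, not the authors' code; the `build_chat` helper and the example value description are hypothetical, and the resulting message list follows the standard chat format accepted by most chat-template-aware LLM APIs.

```python
def build_chat(user_instruction: str, value_description: str) -> list[dict]:
    """Compose a chat where the system message encodes the user's stated
    preferences, instead of the uniform 'You are a helpful assistant'."""
    system_message = (
        "You are an assistant tailored to this user's preferences. "
        f"When responding, prioritize the following values: {value_description}"
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_instruction},
    ]

# Hypothetical example: a user who values concise, evidence-backed answers.
chat = build_chat(
    "Explain how vaccines work.",
    "brevity, citations to primary sources, and a neutral tone",
)
```

A model such as Janus, trained on many such value-bearing system messages, is expected to generalize to unseen combinations of values at inference time without any per-user retraining.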