Multi-objective alignment aims to align LLM responses with multiple human preference objectives. Among existing methods, guiding the generation of frozen LLMs with autoregressive reward models (ARMs) is a low-cost route to multi-objective test-time alignment. However, these methods typically rely on independent parameters for each preference objective: they either train a separate ARM per preference dimension, which neglects interactions among preference features, or train a single ARM with a separate feature-extraction module per preference, which can cause feature entanglement. Both strategies can misalign generated outputs with user preferences. To address this limitation, we propose Preference-Modulated \& Shared Low-Rank Adaptation (MoSLoRA) for ARM training, which first extracts shared features with a preference-agnostic module and then applies affine transformations to those features through a preference-modulation module conditioned on mixed preference vectors. This design mitigates feature entanglement and enables precise control over preference trade-offs at inference time. Building on this, we introduce the Unified Autoregressive Reward Model (UniARM), a novel framework for multi-objective test-time alignment that jointly models all preference dimensions in a single parameter space, eliminating the need for per-objective parameters. Experiments show that UniARM significantly outperforms existing methods on multiple alignment benchmarks while exhibiting strong generalization and computational efficiency. Moreover, UniARM scales seamlessly to larger LLMs, enhancing its practical usability.
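The shared-then-modulated design can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the dimensions, the FiLM-style affine form (scale and shift in the low-rank subspace), and all module names are assumptions introduced for clarity. It shows the key property the abstract claims: a single shared parameter set serves every preference objective, and the mixed preference vector alone steers the adapter's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
d_model, rank, n_prefs = 16, 4, 3

# Shared low-rank adapter (preference-agnostic module): x -> x @ A, later @ B.
A = rng.normal(scale=0.02, size=(d_model, rank))
B = rng.normal(scale=0.02, size=(rank, d_model))

# Preference-modulation module: maps a mixed preference vector w
# (weights over the n_prefs objectives) to per-channel affine parameters.
W_gamma = rng.normal(scale=0.1, size=(n_prefs, rank))
W_beta = rng.normal(scale=0.1, size=(n_prefs, rank))

def modulated_adapter(x, w):
    """Extract shared low-rank features, then apply a preference-conditioned
    affine transform (scale gamma, shift beta) before projecting back."""
    z = x @ A                      # shared features, shape (rank,)
    gamma = 1.0 + w @ W_gamma      # scale conditioned on the preference mix
    beta = w @ W_beta              # shift conditioned on the preference mix
    return (gamma * z + beta) @ B  # back to model dimension, shape (d_model,)

x = rng.normal(size=(d_model,))
w_single = np.array([1.0, 0.0, 0.0])  # one dominant objective
w_mixed = np.array([0.5, 0.3, 0.2])   # a trade-off across objectives

y_single = modulated_adapter(x, w_single)
y_mixed = modulated_adapter(x, w_mixed)
```

Because `A` and `B` are shared across all objectives, changing only `w` changes the adapter's output, which is how a single parameter space can cover the whole space of preference trade-offs at inference time.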