Aligning large language models (LLMs) with personalized preferences, which vary widely across cultural, educational, and political backgrounds, is challenging because traditional alignment methods incur high computational costs and data demands. In response, this paper presents Personalized Alignment at Decoding-time (PAD), a novel framework designed to align LLM outputs with diverse personalized preferences during inference, eliminating the need for additional training. By introducing a personalized reward modeling strategy that decouples the text generation process from personalized preferences, the framework produces generalizable token-level personalized rewards. The PAD algorithm leverages these rewards to guide decoding, dynamically steering the base model's predictions toward personalized preferences. Extensive experiments demonstrate that PAD not only outperforms existing training-based alignment methods in aligning with diverse preferences, but also generalizes to preferences unseen during training and scales across different base models. This work advances the capability of LLMs to meet user needs in real-time applications, a substantial step forward in personalized LLM alignment.
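Since the abstract does not specify PAD's exact reward model or decoding rule, the following is only a minimal sketch of the general idea it describes: at each decoding step, a token-level personalized reward is combined with the base model's logits before the next token is chosen. The `ToyLM` and `ToyPersonalizedRM` modules, the preference embedding, the weighting coefficient `beta`, and the top-k restriction are all illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of decoding-time alignment with token-level personalized
# rewards. Toy stand-in models are used so the example is self-contained.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = 100

class ToyLM(torch.nn.Module):
    """Stand-in base language model: maps a token sequence to next-token logits."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, 32)
        self.head = torch.nn.Linear(32, VOCAB)

    def forward(self, ids):
        return self.head(self.emb(ids).mean(dim=1))  # (batch, VOCAB)

class ToyPersonalizedRM(torch.nn.Module):
    """Stand-in token-level reward model conditioned on a preference embedding."""
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(VOCAB, 32)
        self.pref_proj = torch.nn.Linear(8, 32)

    def forward(self, pref):
        # Score every candidate token against the projected preference vector.
        h = self.pref_proj(pref)       # (32,)
        return self.emb.weight @ h     # (VOCAB,) per-token reward

def pad_decode(lm, rm, pref, prompt_ids, steps=10, beta=2.0, top_k=20):
    """Greedy-ish sampling where personalized rewards reweight the base logits."""
    ids = prompt_ids.clone()
    for _ in range(steps):
        logits = lm(ids)[0]            # (VOCAB,) base next-token logits
        rewards = rm(pref)             # (VOCAB,) personalized token rewards
        # Restrict reward steering to the base model's top-k candidates so
        # fluency is preserved while the preference shifts the final choice.
        topk = torch.topk(logits, top_k)
        combined = topk.values + beta * rewards[topk.indices]
        probs = F.softmax(combined, dim=-1)
        next_id = topk.indices[torch.multinomial(probs, 1)]
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return ids

lm, rm = ToyLM(), ToyPersonalizedRM()
pref = torch.randn(8)  # hypothetical user-preference embedding
print(pad_decode(lm, rm, pref, torch.tensor([[1, 2, 3]])))
```

In this reading, swapping the preference embedding changes the generated continuation without retraining either model, which mirrors the training-free, inference-time adaptation the abstract claims.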