The holy grail of LLM personalization is a single LLM for each user, perfectly aligned with that user's preferences. However, maintaining a separate LLM per user is impractical due to constraints on compute, memory, and system complexity. We address this challenge by developing a principled method for selecting a small portfolio of LLMs that captures representative behaviors across heterogeneous users. We model user preferences across multiple traits (e.g., safety, humor, brevity) through a multi-dimensional weight vector. Given reward functions across these dimensions, our algorithm PALM (Portfolio of Aligned LLMs) generates a small portfolio of LLMs such that, for any weight vector, the portfolio contains a near-optimal LLM for the corresponding scalarized objective. To the best of our knowledge, this is the first result that provides theoretical guarantees on both the size and approximation quality of LLM portfolios for personalization. Our analysis characterizes the trade-off between system cost and personalization, as well as the diversity of LLMs required to cover the landscape of user preferences. We provide empirical results that validate these guarantees and demonstrate greater output diversity than common baselines.
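To make the portfolio idea concrete, here is a minimal illustrative sketch, not the paper's PALM algorithm: each candidate model is summarized by a hypothetical per-trait reward vector, preference weights are sampled from the simplex, and a greedy set cover selects models until every sampled weight vector has a model whose scalarized reward is within eps of the best achievable. The reward values and eps below are invented for illustration.

```python
# Illustrative sketch of a portfolio covering a scalarized objective.
# NOT the paper's PALM algorithm; rewards and eps are hypothetical.

def scalarized(w, r):
    """Scalarized reward: the weight vector w dotted with reward vector r."""
    return sum(wi * ri for wi, ri in zip(w, r))

def build_portfolio(rewards, weights, eps):
    """Greedy cover: add models until every sampled weight vector
    has a model in the portfolio within eps of the optimum."""
    opt = {w: max(scalarized(w, r) for r in rewards) for w in weights}
    portfolio, uncovered = [], set(weights)
    while uncovered:
        best_i, best_cov = None, set()
        for i, r in enumerate(rewards):
            cov = {w for w in uncovered if scalarized(w, r) >= opt[w] - eps}
            if len(cov) > len(best_cov):
                best_i, best_cov = i, cov
        portfolio.append(best_i)   # always covers at least one weight,
        uncovered -= best_cov      # since the optimal model covers its own w

    return portfolio

# Three hypothetical models scored on (safety, humor, brevity)
rewards = [(0.9, 0.2, 0.5), (0.3, 0.9, 0.4), (0.5, 0.5, 0.9)]
# Grid of preference weight vectors on the 2-simplex
weights = [(i / 4, j / 4, (4 - i - j) / 4)
           for i in range(5) for j in range(5 - i)]
portfolio = build_portfolio(rewards, weights, eps=0.05)
```

The covering guarantee here holds only on the sampled grid; the paper's contribution is a guarantee over the entire weight simplex with a bound on portfolio size.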