Personalization in Large Language Models (LLMs) often relies on user-specific soft prompts. However, these prompts become obsolete when the foundation model is upgraded, necessitating costly full-scale retraining. To overcome this limitation, we propose the Prompt-level User Migration Adapter (PUMA), a lightweight framework that efficiently migrates personalized prompts across incompatible models. PUMA uses a parameter-efficient adapter to bridge the semantic gap between models, combined with a group-based user selection strategy that significantly reduces training costs. Experiments on three large-scale datasets show that our method matches or even surpasses the performance of retraining from scratch while reducing computational cost by up to 98%. The framework demonstrates strong generalization across diverse model architectures and robustness in advanced scenarios such as chained and aggregated migrations, offering a practical path toward the sustainable evolution of personalized AI by decoupling user assets from the underlying models.
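To make the core idea concrete, below is a minimal sketch of what a prompt-migration adapter could look like. It assumes soft prompts are stored as `(prompt_len, d_src)` tensors in the source model's embedding space and the target model uses embeddings of width `d_tgt`; the bottleneck MLP design, the dimensions, the MSE objective, and the `reference_prompts` supervision signal are all illustrative assumptions, not the paper's specified architecture. Training only the small adapter (rather than re-learning every user's prompt) is what makes the approach parameter-efficient, and fitting it on a selected group of users rather than the full population mirrors the group-based selection idea in spirit.

```python
import torch
import torch.nn as nn


class PromptMigrationAdapter(nn.Module):
    """Maps a user's soft prompt from the source model's embedding space
    into the target model's embedding space via a small bottleneck MLP.
    Only the adapter is trained; user prompts and both LLMs stay frozen."""

    def __init__(self, d_src: int, d_tgt: int, bottleneck: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_src, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, d_tgt),
        )

    def forward(self, soft_prompt: torch.Tensor) -> torch.Tensor:
        # soft_prompt: (..., prompt_len, d_src) -> (..., prompt_len, d_tgt)
        return self.net(soft_prompt)


if __name__ == "__main__":
    # Toy setup: a small group of selected users (hypothetical data).
    d_src, d_tgt, prompt_len, n_users = 768, 1024, 10, 32
    adapter = PromptMigrationAdapter(d_src, d_tgt)
    opt = torch.optim.AdamW(adapter.parameters(), lr=1e-3)

    src_prompts = torch.randn(n_users, prompt_len, d_src)        # stored user prompts
    reference_prompts = torch.randn(n_users, prompt_len, d_tgt)  # target-space supervision

    for step in range(100):
        pred = adapter(src_prompts)
        loss = nn.functional.mse_loss(pred, reference_prompts)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Once fitted, the same adapter migrates any user's prompt in one
    # forward pass, with no per-user retraining on the target model.
    new_prompt = adapter(torch.randn(prompt_len, d_src))
```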