Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning on diverse instruction data. Notably, federated foundation models (FedFM) have emerged as a privacy-preserving approach to fine-tuning models collaboratively under federated learning (FL) settings, leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient fine-tuning methods have been introduced, and some studies have adapted personalization methods to FedFM for better alignment with user preferences. However, a critical gap in existing research is the neglect of test-time distribution shifts in real-world applications, and conventional methods for handling test-time distribution shifts in personalized FL are less effective for FedFM because they fail to adapt to complex distribution-shift scenarios and require training all parameters. To bridge this gap, we refine the FedFM setting into what we term test-time personalization, which aims to learn personalized federated foundation models on clients while simultaneously handling test-time distribution shifts. To address the challenges of this setting, we explore a simple yet effective solution: a Federated Dual-Personalizing Adapter (FedDPA) architecture. Working alongside a foundation model, a global adapter and a local adapter jointly tackle test-time distribution shifts and client-specific personalization. Additionally, we introduce an instance-wise dynamic weighting mechanism that integrates the global and local adapters for each test instance during inference, enabling effective test-time personalization. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
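The dual-adapter idea with instance-wise weighting can be illustrated with a minimal numpy sketch. This is a hedged illustration, not the paper's exact mechanism: it assumes LoRA-style low-rank adapters on a single frozen linear layer, and uses a hypothetical cosine-similarity gate against a client-data centroid (`local_centroid`) as the per-instance weight; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and adapter rank (illustrative values)

def lora(x, A, B):
    """LoRA-style adapter: low-rank additive update for input x."""
    return x @ A @ B

# Frozen foundation-model layer plus two adapters (global and local).
W = rng.normal(size=(d, d)) * 0.01
A_g, B_g = rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1
A_l, B_l = rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1

# Hypothetical proxy for the client's local distribution: the mean
# representation of its training instances (assumption for this sketch).
local_centroid = rng.normal(size=d)

def dynamic_weight(x, centroid):
    """Instance-wise weight in [0, 1]: higher when the test instance
    resembles the client's local data (rescaled cosine similarity)."""
    cos = x @ centroid / (np.linalg.norm(x) * np.linalg.norm(centroid) + 1e-8)
    return (cos + 1.0) / 2.0

def forward(x):
    """Frozen layer output plus a per-instance mixture of the two adapters:
    in-distribution instances lean on the local (personalized) adapter,
    shifted instances lean on the global one."""
    w = dynamic_weight(x, local_centroid)
    return x @ W + w * lora(x, A_l, B_l) + (1.0 - w) * lora(x, A_g, B_g)

x_test = rng.normal(size=d)
y = forward(x_test)
```

In this sketch only the adapter matrices would be trained (the local one on client data, the global one via federated aggregation), which is what keeps the approach parameter-efficient relative to full fine-tuning.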