Foundation models open up new possibilities for the use of AI in healthcare. However, even when pre-trained on health data, they still need to be fine-tuned for specific downstream tasks. Furthermore, although foundation models reduce the amount of training data required to achieve good performance, obtaining sufficient data is still a challenge. This is due, in part, to restrictions on sharing and aggregating data from different sources to protect patients' privacy. One possible solution is to fine-tune foundation models via federated learning across multiple participating clients (e.g., hospitals and clinics). In this work, we propose a new personalized federated fine-tuning method that learns orthogonal LoRA adapters to disentangle general and client-specific knowledge, enabling each client to fully exploit both its own data and that of others. Our preliminary results on real-world federated medical imaging tasks demonstrate that our approach is competitive with current federated fine-tuning methods.
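To make the core idea concrete, below is a minimal sketch (not the authors' exact implementation) of a frozen linear layer augmented with two LoRA adapters: a shared adapter that would be aggregated by the server and a personal adapter kept local to each client, with a Frobenius-norm penalty that encourages the two adapter subspaces to be orthogonal. All names (DualLoRALinear, A_global, orthogonality_penalty, etc.) and the specific form of the penalty are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualLoRALinear(nn.Module):
    """Frozen linear layer with a shared (global) LoRA adapter and a
    client-specific (personal) LoRA adapter. Illustrative sketch only;
    not the exact method proposed in the paper."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # foundation-model weights stay frozen

        in_f, out_f = base.in_features, base.out_features
        self.scale = alpha / rank
        # Shared adapter: sent to the server and aggregated across clients.
        self.A_global = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B_global = nn.Parameter(torch.zeros(out_f, rank))
        # Personal adapter: stays local to the client.
        self.A_personal = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B_personal = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x):
        delta_g = (x @ self.A_global.T) @ self.B_global.T
        delta_p = (x @ self.A_personal.T) @ self.B_personal.T
        return self.base(x) + self.scale * (delta_g + delta_p)

    def orthogonality_penalty(self):
        # Penalize overlap between the shared and personal adapter subspaces,
        # ||A_global A_personal^T||_F^2, so client-specific knowledge is
        # disentangled from general knowledge.
        return (self.A_global @ self.A_personal.T).pow(2).sum()
```

In such a setup, each client would minimize its task loss plus a weighted sum of the orthogonality penalties over all adapted layers, and only the global adapter parameters (A_global, B_global) would be communicated for server-side averaging; the personal adapters never leave the client, which is one way the method could exploit both local and federated data.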