Federated learning (FL) has made tremendous progress in providing collaborative training solutions for distributed data silos with privacy guarantees. However, few existing works explore a more realistic scenario in which clients hold multiple data modalities. In this paper, we aim to solve a novel challenge in multi-modal federated learning (MFL) -- modality missing -- where clients may lack some of the modalities in their local datasets. To tackle this problem, we propose a novel multi-modal federated learning method, Federated Multi-modal contrastiVe training with Pre-trained completion (FedMVP), which integrates large-scale pre-trained models to enhance federated training. In the proposed FedMVP framework, each client deploys a large-scale pre-trained model with frozen parameters for modality completion and representation knowledge transfer, enabling efficient and robust local training. On the server side, we use generated data to uniformly measure the representation similarity among the uploaded client models and construct a graph over them to aggregate the models according to their importance in the system. We demonstrate that the model achieves superior performance on two real-world image-text classification datasets and is robust to the performance degradation caused by missing modalities.