We introduce FedEvPrompt, a federated learning approach that integrates evidential deep learning, prompt tuning, and knowledge distillation for distributed skin lesion classification. FedEvPrompt leverages two sets of prompts, b-prompts (for low-level, basic visual knowledge) and t-prompts (for task-specific knowledge), prepended to frozen pre-trained Vision Transformer (ViT) models and trained within an evidential learning framework to maximize class evidence. Crucially, knowledge sharing across federation clients is achieved solely through knowledge distillation on attention maps generated by the local ViT models, offering stronger privacy preservation than traditional parameter- or synthetic-image-sharing methodologies. FedEvPrompt is optimized within a round-based learning paradigm, where each round involves training the local models and then sharing their attention maps with all federation clients. Experimental validation in a real distributed setting on the ISIC2019 dataset demonstrates the superior performance of FedEvPrompt over baseline federated learning algorithms and knowledge distillation methods, without sharing model parameters. In conclusion, FedEvPrompt offers a promising approach for federated learning, effectively addressing challenges such as data heterogeneity, class imbalance, privacy preservation, and knowledge sharing.
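The evidential learning head mentioned above can be illustrated with a minimal sketch. This assumes the standard Dirichlet-based formulation of evidential deep learning (non-negative evidence obtained from the network output, Dirichlet concentration alpha = evidence + 1); it is not the paper's exact implementation, and all names are illustrative.

```python
import numpy as np

def evidential_outputs(logits):
    """Map raw per-class outputs to expected probabilities and uncertainty.

    Standard Dirichlet formulation (assumption, not taken from the paper):
      evidence e = softplus(logits) >= 0
      concentration alpha = e + 1
      strength S = sum(alpha)
      expected class probability p_k = alpha_k / S
      vacuity (uncertainty) u = K / S, high when total evidence is low
    """
    evidence = np.log1p(np.exp(logits))   # softplus keeps evidence non-negative
    alpha = evidence + 1.0                # Dirichlet concentration parameters
    strength = alpha.sum()
    prob = alpha / strength               # expected class probabilities
    uncertainty = len(alpha) / strength   # total vacuity of the prediction
    return prob, uncertainty

# Strong evidence for class 0 yields a confident, low-uncertainty prediction.
p, u = evidential_outputs(np.array([5.0, -2.0, -2.0]))
```

Maximizing class evidence during prompt tuning then amounts to pushing `alpha` for the true class up, which simultaneously drives the vacuity `u` down on well-covered samples.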