Federated Learning (FL) is a pioneering approach to distributed machine learning, enabling collaborative model training across multiple clients while preserving data privacy. However, the inherent heterogeneity arising from imbalanced class representation across clients poses significant challenges, often biasing the global model towards the majority class. This issue is particularly prevalent in healthcare settings, where hospitals acting as clients contribute medical images. To address class imbalance and reduce bias, we propose a co-distillation-driven framework for the federated healthcare setting. Unlike traditional federated setups that rely on a designated central server, our framework promotes knowledge sharing directly among clients to collectively improve learning outcomes. Our experiments demonstrate that in a federated healthcare setting, co-distillation outperforms other federated methods in handling class imbalance. Additionally, our framework exhibits the lowest standard deviation as imbalance increases while still outperforming other baselines, signifying the robustness of our framework for FL in healthcare.
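As a rough illustration of the co-distillation objective sketched above, a client can combine a standard cross-entropy term on its own labels with a divergence term that pulls its softened predictions towards the averaged softened predictions of its peers. This is a minimal numpy sketch under our own assumptions; the function names, the temperature `T`, and the weighting `alpha` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """Mean KL(p || q) over a batch of distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q), axis=-1).mean())

def co_distillation_loss(own_logits, peer_logits_list, labels, alpha=0.5, T=2.0):
    """Illustrative per-client loss: hard-label cross-entropy plus a
    distillation term towards the averaged softened peer predictions.
    (alpha and T are hypothetical hyperparameters for this sketch.)"""
    probs = softmax(own_logits)
    batch = np.arange(len(labels))
    ce = float(-np.log(np.clip(probs[batch, labels], 1e-12, 1.0)).mean())
    peer_avg = np.mean([softmax(pl, T) for pl in peer_logits_list], axis=0)
    kd = kl_div(peer_avg, softmax(own_logits, T))
    return (1.0 - alpha) * ce + alpha * kd
```

When every peer agrees with the client, the distillation term vanishes and only the cross-entropy term remains; disagreeing peers increase the loss, nudging clients towards a shared consensus without routing everything through a central server.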