Federated learning has emerged as a compelling paradigm for medical image segmentation, particularly in light of increasing privacy concerns. However, most existing research relies on relatively stringent assumptions regarding the uniformity and completeness of annotations across clients. In contrast, this paper highlights a prevalent challenge in medical practice: incomplete annotations. Such annotations can introduce incorrectly labeled pixels, potentially undermining the performance of neural networks under supervised learning. To tackle this issue, we introduce a novel solution, named FedIA. Our insight is to conceptualize incomplete annotations as noisy data (i.e., low-quality data), with a focus on mitigating their adverse effects. We begin by evaluating the completeness of annotations at the client level using a designed indicator. Subsequently, we enhance the influence of clients with more comprehensive annotations and apply corrections to incomplete ones, thereby ensuring that models are trained on accurate data. The effectiveness of our method is validated by its superior performance on two widely used medical image segmentation datasets, where it outperforms existing solutions. The code is available at https://github.com/HUSTxyy/FedIA.
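The idea of up-weighting clients with more complete annotations during aggregation can be sketched as a completeness-weighted variant of federated averaging. The sketch below is illustrative only: the `completeness` scores, the `weighted_aggregate` function, and the normalization rule are assumptions for exposition, not the paper's exact indicator or algorithm.

```python
import numpy as np

def weighted_aggregate(client_params, completeness):
    """Aggregate client model parameters, scaling each client's
    contribution by its (hypothetical) annotation-completeness score.

    client_params: list of dicts mapping parameter name -> np.ndarray
    completeness:  per-client scores; higher means more complete labels
    """
    scores = np.asarray(completeness, dtype=float)
    alpha = scores / scores.sum()  # normalized aggregation weights
    aggregated = {}
    for name in client_params[0]:
        # Completeness-weighted average of this parameter across clients.
        aggregated[name] = sum(a * p[name] for a, p in zip(alpha, client_params))
    return aggregated

# Example: client 1's annotations are judged twice as complete,
# so its parameters receive twice the aggregation weight.
clients = [{"w": np.array([0.0])},
           {"w": np.array([4.0])},
           {"w": np.array([8.0])}]
global_params = weighted_aggregate(clients, completeness=[1.0, 2.0, 1.0])
```

In practice such weights would be combined with the usual per-client sample counts used by federated averaging; the sketch isolates only the completeness term.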