Semi-supervised learning for medical image segmentation poses the dual challenge of using limited labeled data efficiently while leveraging abundant unlabeled data. Despite recent advances, existing methods often fail to fully exploit unlabeled data to improve model robustness and accuracy. In this paper, we introduce CrossMatch, a novel framework that integrates knowledge distillation with dual perturbation strategies, at both the image and feature levels, to improve learning from labeled and unlabeled data alike. CrossMatch employs multiple encoders and decoders to generate diverse data streams, which undergo self-knowledge distillation to enhance the consistency and reliability of predictions across varied perturbations. By effectively narrowing the gap between training on labeled and unlabeled data, our method improves edge accuracy and generalization in medical image segmentation and significantly surpasses state-of-the-art techniques on standard benchmarks. Extensive experiments demonstrate the efficacy of CrossMatch, showing substantial performance gains without added computational cost. Code is available at https://github.com/AiEson/CrossMatch.git.
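To make the dual-perturbation consistency idea concrete, the sketch below shows one minimal way such a scheme could look in PyTorch. All names, the toy network, and the specific perturbations (additive Gaussian noise at the image level, channel dropout at the feature level) are illustrative assumptions, not the paper's actual architecture or training code.

```python
# Hypothetical sketch of dual-perturbation consistency for semi-supervised
# segmentation; the network and perturbation choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Minimal encoder-decoder stand-in for a segmentation network."""
    def __init__(self, in_ch=1, num_classes=2, feat_drop=0.5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU())
        self.feat_perturb = nn.Dropout2d(feat_drop)  # feature-level perturbation
        self.decoder = nn.Conv2d(8, num_classes, 1)

    def forward(self, x, perturb_features=False):
        f = self.encoder(x)
        if perturb_features:
            f = self.feat_perturb(f)
        return self.decoder(f)

def consistency_loss(student_logits, teacher_logits):
    """KL divergence pulling perturbed predictions toward the clean ones."""
    return F.kl_div(
        F.log_softmax(student_logits, dim=1),
        F.softmax(teacher_logits.detach(), dim=1),  # no gradient through teacher
        reduction="batchmean")

net = TinySegNet()
x_unlabeled = torch.randn(4, 1, 32, 32)  # toy unlabeled batch

# Clean stream: prediction on the unperturbed image acts as the target.
with torch.no_grad():
    teacher = net(x_unlabeled)

# Image-level perturbation: additive Gaussian noise (one simple choice).
x_noisy = x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)

# Two perturbed streams, both regularized toward the clean prediction.
loss = (consistency_loss(net(x_noisy), teacher)
        + consistency_loss(net(x_unlabeled, perturb_features=True), teacher))
loss.backward()
```

In a full pipeline this unsupervised term would be added to an ordinary supervised loss on the labeled subset; the abstract's multi-encoder/decoder design would replace the single toy network here.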