Recent research has shown the potential of deep learning for multi-parametric MRI-based visual pathway (VP) segmentation. However, obtaining labeled data for training is laborious and time-consuming, so effective algorithms are needed when labeled samples are limited. In this work, we propose a label-efficient deep learning method with self-ensembling (LESEN). LESEN combines supervised and unsupervised losses, enabling the student and teacher models to learn from each other and forming a self-ensembling mean teacher framework. Additionally, we introduce a reliable unlabeled sample selection (RUSS) mechanism to further enhance LESEN's effectiveness. Experiments on the Human Connectome Project (HCP) dataset demonstrate that our method outperforms state-of-the-art techniques, advancing multimodal VP segmentation for comprehensive analysis in clinical and research settings. The implementation code will be available at: https://github.com/aldiak/Semi-Supervised-Multimodal-Visual-Pathway-Delineation.
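The self-ensembling mean teacher scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the EMA rate `alpha`, and the consistency weight `lam` are assumptions made for the example. The teacher's weights track an exponential moving average (EMA) of the student's, and the training objective combines a supervised loss on labeled data with an unsupervised consistency loss between student and teacher predictions on unlabeled data.

```python
# Hedged sketch of mean-teacher self-ensembling (illustrative, not the LESEN code).

def ema_update(teacher_w, student_w, alpha=0.99):
    """Move each teacher weight toward the student weight via an EMA.

    alpha close to 1.0 makes the teacher a slowly varying ensemble of
    past student states.
    """
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher_w, student_w)]

def total_loss(sup_loss, cons_loss, lam=0.1):
    """Combine supervised and unsupervised (consistency) losses.

    lam weights the consistency term computed on unlabeled samples.
    """
    return sup_loss + lam * cons_loss

# Toy weights to show one EMA step.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, alpha=0.9)
# teacher is now approximately [0.1, 0.2]
```

In practice, only the student is updated by gradient descent; the teacher is refreshed by `ema_update` after each step, and a mechanism such as RUSS would filter which unlabeled samples contribute to the consistency term.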