Medical image segmentation is pivotal in healthcare, enhancing diagnostic accuracy, informing treatment strategies, and tracking disease progression. This process allows clinicians to extract critical information from visual data, enabling personalized patient care. However, developing neural networks for segmentation remains challenging, especially when preserving image resolution, which is essential for detecting the subtle details that influence diagnoses. Moreover, the lack of transparency in these deep learning models has slowed their adoption in clinical practice, and efforts in model interpretability increasingly focus on making their decision-making processes more transparent. In this paper, we introduce MAPUNetR, a novel architecture that synergizes the strengths of transformer models with the proven U-Net framework for medical image segmentation. Our model addresses the resolution-preservation challenge and incorporates attention maps that highlight segmented regions, increasing both accuracy and interpretability. MAPUNetR achieved a Dice score of 0.88 on the BraTS 2020 dataset and 0.92 on the ISIC 2018 dataset. Our experiments show that the model maintains stable performance and has potential as a powerful tool for medical image segmentation in clinical practice.
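Since performance is reported as Dice scores, a minimal sketch of how the Dice coefficient is conventionally computed on binary segmentation masks may be helpful; this is the standard definition, not the paper's own evaluation code, and the smoothing term `eps` is an assumed convention to avoid division by zero on empty masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps (an assumed smoothing constant) keeps the score defined when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 indicates perfect overlap between predicted and ground-truth regions, so the reported 0.88 (BraTS 2020) and 0.92 (ISIC 2018) correspond to high overlap with the expert annotations.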