Automated medical image segmentation is becoming increasingly important in modern clinical practice, driven by the growing demand for precise diagnosis, the push toward personalized treatment plans, and advances in machine learning algorithms, especially the incorporation of deep learning methods. While convolutional neural networks (CNNs) have been prevalent among these methods, the remarkable potential of Transformer-based models for computer vision tasks is gaining increasing recognition. To harness the advantages of both CNN-based and Transformer-based models, we propose a simple yet effective UNet-Transformer (seUNet-Trans) model for medical image segmentation. In our approach, the UNet serves as a feature extractor that generates multiple feature maps from the input images; these maps are then propagated into a bridge layer, which is introduced to sequentially connect the UNet and the Transformer. In this stage, we adopt a pixel-level embedding technique without positional embedding vectors to make the model more efficient. Moreover, we apply spatial-reduction attention in the Transformer to reduce the computational and memory overhead. By leveraging the UNet architecture and the self-attention mechanism, our model preserves both local and global context information and captures long-range dependencies between input elements. The proposed model is extensively evaluated on seven medical image segmentation datasets, including polyp segmentation, to demonstrate its efficacy. Comparison with several state-of-the-art segmentation models on these datasets shows the superior performance of the proposed seUNet-Trans network.
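To illustrate the spatial-reduction attention mentioned above, the following is a minimal NumPy sketch (not the paper's implementation): the key/value sequence is spatially downsampled by a reduction ratio R before attention, shrinking the attention matrix from (HW)×(HW) to (HW)×(HW/R²). The learned query/key/value projections and multi-head splitting of a full Transformer layer are omitted for brevity, and the average-pooling reduction is an assumption standing in for a strided convolution.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_reduction_attention(feat, R=2):
    """Single-head attention over a (H, W, C) feature map, with
    keys/values average-pooled by a factor R in each spatial dim."""
    H, W, C = feat.shape
    q = feat.reshape(H * W, C)                  # one query per pixel: (HW, C)
    # spatially reduce keys/values: pool R x R patches (crop to multiple of R)
    kv = feat[:H - H % R, :W - W % R]
    kv = kv.reshape(H // R, R, W // R, R, C).mean(axis=(1, 3))
    kv = kv.reshape(-1, C)                      # (HW / R^2, C)
    attn = softmax(q @ kv.T / np.sqrt(C))       # (HW, HW / R^2) instead of (HW, HW)
    return (attn @ kv).reshape(H, W, C)

out = spatial_reduction_attention(np.random.rand(8, 8, 16), R=2)
print(out.shape)  # (8, 8, 16): same spatial resolution, cheaper attention
```

With R=2 the attention matrix has a quarter as many columns, which is the source of the memory/compute savings claimed for this mechanism.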