Large Language Models (LLMs), known for their versatility in processing textual data, are increasingly being explored for their potential to enhance medical image segmentation, a task crucial to accurate diagnostic imaging. This study investigates enhancing Vision Transformers (ViTs) for medical image segmentation by integrating pre-trained LLM transformer blocks. Our approach, which incorporates a frozen LLM transformer block into the encoder of a ViT-based model, yields substantial improvements in segmentation performance across various medical imaging modalities. We propose a Hybrid Attention Mechanism that combines global and local feature learning with a Multi-Scale Fusion Block that aggregates features across different scales. The enhanced model shows significant performance gains, including an average Dice score increase from 0.74 to 0.79, along with improvements in accuracy, precision, and the Jaccard Index. These results demonstrate the effectiveness of LLM-based transformers in refining medical image segmentation, highlighting their potential to significantly boost model accuracy and robustness. The source code and our implementation are available at: https://bit.ly/3zf2CVs
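To make the core idea concrete, below is a minimal PyTorch sketch of inserting a frozen, pre-trained transformer block into a ViT-style token pipeline. This is an illustrative assumption of the general technique, not the paper's implementation: the class name `FrozenLLMBlock`, the projection layers, and all dimensions are hypothetical, and a generic `nn.TransformerEncoderLayer` stands in for an actual LLM block loaded from a checkpoint. The linked repository contains the authors' code.

```python
import torch
import torch.nn as nn

class FrozenLLMBlock(nn.Module):
    """Wraps a pre-trained transformer block with frozen weights.

    Hypothetical sketch: `llm_block` stands for one transformer layer
    taken from a pre-trained LLM; the abstract does not specify which.
    Learnable linear projections bridge the ViT and LLM hidden widths.
    """
    def __init__(self, llm_block: nn.Module, vit_dim: int, llm_dim: int):
        super().__init__()
        self.llm_block = llm_block
        for p in self.llm_block.parameters():
            p.requires_grad = False           # keep the LLM block frozen
        self.proj_in = nn.Linear(vit_dim, llm_dim)
        self.proj_out = nn.Linear(llm_dim, vit_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, vit_dim) token sequence from the ViT encoder
        h = self.proj_in(x)
        h = self.llm_block(h)
        return x + self.proj_out(h)           # residual around the frozen block

# Stand-in for a pre-trained LLM layer; in practice this would be loaded
# from an open-weight LLM checkpoint.
llm_layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
block = FrozenLLMBlock(llm_layer, vit_dim=384, llm_dim=768)
tokens = torch.randn(2, 196, 384)             # 14x14 patch tokens from a ViT
out = block(tokens)                           # shape: (2, 196, 384)
```

The residual connection and the frozen weights together mean the ViT encoder only learns the two lightweight projections around the LLM block, which is one plausible reading of how a frozen block can be integrated without destabilizing training.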