3D medical image analysis is essential for modern healthcare, yet traditional task-specific models fall short because of their limited generalizability across diverse clinical scenarios. Multimodal large language models (MLLMs) offer a promising solution to these challenges, but existing MLLMs do not fully exploit the rich, hierarchical information embedded in 3D medical images. Inspired by clinical practice, where radiologists attend to both 3D spatial structure and 2D planar content, we propose Med-2E3, a 3D medical MLLM built on a dual 3D-2D encoder architecture. To aggregate 2D features effectively, we design a Text-Guided Inter-Slice (TG-IS) scoring module, which assigns an attention score to each 2D slice based on its content and the task instruction. To the best of our knowledge, Med-2E3 is the first MLLM to integrate both 3D and 2D features for 3D medical image analysis. Experiments on large-scale, open-source 3D medical multimodal datasets demonstrate that TG-IS produces task-specific attention distributions and that Med-2E3 significantly outperforms current state-of-the-art models. The code is available at: https://github.com/MSIIP/Med-2E3
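To make the TG-IS idea concrete, the sketch below shows one plausible way to score slices with a text instruction: pooled per-slice 2D features and a pooled instruction embedding are projected into a shared space, relevance is computed by scaled dot product, and a softmax yields one weight per slice for aggregating the 2D features. This is a minimal illustration assuming pooled embeddings as inputs; the module, layer, and argument names are ours, not the authors' released implementation.

```python
# Minimal PyTorch sketch of text-guided inter-slice (TG-IS) scoring.
# Assumption: slice features and the instruction embedding are already pooled
# by upstream 2D/text encoders; names here are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TGISScoring(nn.Module):
    """Scores each 2D slice using its pooled features and the task instruction."""

    def __init__(self, slice_dim: int, text_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.slice_proj = nn.Linear(slice_dim, hidden_dim)  # project slice features
        self.text_proj = nn.Linear(text_dim, hidden_dim)    # project instruction embedding

    def forward(self, slice_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        """
        slice_feats: (batch, num_slices, slice_dim) pooled 2D features, one row per slice
        text_feat:   (batch, text_dim) pooled embedding of the task instruction
        returns:     (batch, num_slices) attention scores summing to 1 over slices
        """
        q = self.text_proj(text_feat).unsqueeze(1)         # (batch, 1, hidden_dim)
        k = self.slice_proj(slice_feats)                   # (batch, num_slices, hidden_dim)
        logits = (q * k).sum(dim=-1) / k.shape[-1] ** 0.5  # scaled dot-product relevance
        return F.softmax(logits, dim=-1)                   # normalized inter-slice scores


if __name__ == "__main__":
    scorer = TGISScoring(slice_dim=768, text_dim=512)
    slices = torch.randn(2, 32, 768)   # e.g. 32 axial slices per volume
    text = torch.randn(2, 512)
    scores = scorer(slices, text)      # (2, 32): one attention weight per slice
    weighted_2d = (scores.unsqueeze(-1) * slices).sum(dim=1)  # aggregated 2D feature
    print(scores.shape, weighted_2d.shape)
```

In this sketch the aggregated 2D feature would then be combined with the 3D encoder output before being passed to the language model; the weighting makes the 2D branch task-dependent, which is the property the abstract attributes to TG-IS.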