Gliomas are brain tumours that stand out for their aggressive and highly lethal nature, which demands a precise approach to their diagnosis. Medical image segmentation plays a crucial role in the evaluation and follow-up of these tumours, allowing specialists to analyse their morphology. However, existing methods for automatic glioma segmentation often lack generalization capability across other brain tumour domains, require extensive computational resources, or fail to fully exploit the multi-parametric MRI (mp-MRI) data used to delineate them. In this work, we introduce GBT-SAM, a novel Generalizable Brain Tumour (GBT) framework that extends the Segment Anything Model (SAM) to brain tumour segmentation tasks. Our method employs a two-step training protocol: first, fine-tuning the patch embedding layer to process all mp-MRI modalities jointly, and second, incorporating parameter-efficient LoRA blocks and a Depth-Condition block into the Vision Transformer (ViT) to capture inter-slice correlations. GBT-SAM achieves state-of-the-art performance on the Adult Glioma dataset (Dice Score of $93.54$) while demonstrating robust generalization across the Meningioma, Pediatric Glioma, and Sub-Saharan Glioma datasets. Furthermore, GBT-SAM uses fewer than 6.5M trainable parameters, thus offering an efficient solution for brain tumour segmentation. \\ Our code and models are available at https://github.com/vpulab/med-sam-brain .
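The two adaptations named above can be sketched in PyTorch. This is a minimal illustrative sketch, not the paper's released code: the channel count (4 mp-MRI modalities), embedding width (768), LoRA rank, and the stand-in `qkv` projection are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.A = nn.Linear(base.in_features, rank, bias=False)
        self.B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)        # update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))


# Step 1 (illustrative): a patch-embedding layer sized for four mp-MRI
# modalities (e.g. T1, T1ce, T2, FLAIR) instead of SAM's three RGB channels.
patch_embed = nn.Conv2d(in_channels=4, out_channels=768,
                        kernel_size=16, stride=16)

# Step 2 (illustrative): wrap an attention projection with a LoRA block so
# only the low-rank factors A and B are trained.
qkv = LoRALinear(nn.Linear(768, 768 * 3), rank=4)

x = torch.randn(1, 4, 224, 224)                       # one 4-modality slice
tokens = patch_embed(x).flatten(2).transpose(1, 2)    # (1, 196, 768)
out = qkv(tokens)                                     # (1, 196, 2304)
```

Because `B` is zero-initialized, the wrapped layer initially reproduces the frozen base layer exactly, so fine-tuning starts from the pretrained model's behaviour.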