Foundation models and the pre-train-and-adapt paradigm, in which a large-scale pre-trained model is transferred to downstream tasks, are gaining attention in volumetric medical image segmentation. However, current transfer learning strategies based on full fine-tuning may require significant resources and yield sub-optimal results when labeled data for the target task is scarce. This limits their applicability in real clinical settings, since institutions are usually constrained in the data and computational resources needed to develop proprietary solutions. To address this challenge, we formalize Few-Shot Efficient Fine-Tuning (FSEFT), a novel and realistic scenario for adapting medical image segmentation foundation models. This setting considers the key roles of both data and parameter efficiency during adaptation. Building on a foundation model pre-trained on open-access CT organ segmentation sources, we propose leveraging Parameter-Efficient Fine-Tuning and black-box Adapters to address these challenges. Furthermore, we introduce novel efficient adaptation methodologies, including Spatial black-box Adapters, which are better suited to dense prediction tasks, and constrained transductive inference, which leverages task-specific prior knowledge. Our comprehensive transfer learning experiments confirm the suitability of foundation models for medical image segmentation and unveil the limitations of popular fine-tuning strategies in few-shot scenarios.