Medical image segmentation is evolving from task-specific models toward generalizable frameworks. Recent research leverages Multi-modal Large Language Models (MLLMs) as autonomous agents, employing reinforcement learning with verifiable rewards (RLVR) to orchestrate specialized tools such as the Segment Anything Model (SAM). However, these approaches often rely on single-turn, rigid interaction strategies and lack process-level supervision during training, which hinders their ability to fully exploit the dynamic potential of interactive tools and leads to redundant actions. To bridge these gaps, we propose MedSAM-Agent, a framework that reformulates interactive segmentation as a multi-step autonomous decision-making process. First, we introduce a hybrid prompting strategy for expert-curated trajectory generation, enabling the model to internalize human-like decision heuristics and adaptive refinement strategies. Furthermore, we develop a two-stage training pipeline that integrates multi-turn, end-to-end outcome verification with a clinical-fidelity process-reward design to promote interaction parsimony and decision efficiency. Extensive experiments across 6 medical imaging modalities and 21 datasets demonstrate that MedSAM-Agent achieves state-of-the-art performance, effectively unifying autonomous medical reasoning with robust, iterative optimization. Code is available \href{https://github.com/CUHK-AIM-Group/MedSAM-Agent}{here}.
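The abstract names two reward signals, multi-turn outcome verification and a process reward promoting interaction parsimony, without spelling out their form. Below is a minimal sketch of one plausible instantiation, assuming Dice-based outcome verification on the final mask plus a linear per-turn penalty; `trajectory_reward`, `lam`, and `max_turns` are hypothetical names and hyperparameters for illustration, not the paper's actual reward design.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def trajectory_reward(final_mask: np.ndarray,
                      gt_mask: np.ndarray,
                      num_turns: int,
                      lam: float = 0.1,
                      max_turns: int = 5) -> float:
    """End-to-end outcome verification (Dice on the final mask) combined
    with a process-level parsimony term: each interaction turn beyond the
    first costs a fraction of `lam`, discouraging redundant actions.
    (Hypothetical form; the paper's reward is not specified here.)"""
    outcome = dice_score(final_mask, gt_mask)
    penalty = lam * max(0, num_turns - 1) / max_turns
    return outcome - penalty

# Toy usage: the same near-perfect mask reached in 2 turns scores higher
# than when it takes 5 turns, since only the turn count differs.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = gt.copy()
print(trajectory_reward(pred, gt, num_turns=2))  # higher reward
print(trajectory_reward(pred, gt, num_turns=5))  # same Dice, lower reward
```

Under this assumed form, tying the penalty to the turn count rather than to individual tool calls keeps the signal verifiable end to end while still favoring shorter interaction trajectories.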