Surgical image segmentation is essential for robot-assisted surgery and intraoperative guidance. However, existing methods are constrained to predefined categories, produce one-shot predictions without adaptive refinement, and lack mechanisms for clinician interaction. We propose IR-SIS, an iterative refinement system for surgical image segmentation that accepts natural language descriptions. IR-SIS leverages a fine-tuned SAM3 for initial segmentation, employs a Vision-Language Model to detect instruments and assess segmentation quality, and applies an agentic workflow that adaptively selects refinement strategies. The system supports clinician-in-the-loop interaction through natural language feedback. We also construct a multi-granularity language-annotated dataset from EndoVis2017 and EndoVis2018 benchmarks. Experiments demonstrate state-of-the-art performance on both in-domain and out-of-distribution data, with clinician interaction providing additional improvements. Our work establishes the first language-based surgical segmentation framework with adaptive self-refinement capabilities.