We introduce the Surgical Action Planning (SAP) task for robot-assisted minimally invasive surgery, which generates future action plans from visual inputs to address the absence of intraoperative predictive planning in current intelligent applications. SAP shows great potential for enhancing intraoperative guidance and automating procedures, but it faces challenges such as understanding instrument-action relationships and tracking surgical progress. Large Language Models (LLMs) show promise in understanding surgical video content, yet they remain underexplored for predictive decision-making in SAP because they focus mainly on retrospective analysis; challenges such as data privacy, computational demands, and modality-specific constraints further highlight significant research gaps. To tackle these challenges, we introduce LLM-SAP, an LLM-based Surgical Action Planning framework that predicts future actions and generates text responses by interpreting natural-language prompts of surgical goals. These text responses can support surgical education, intraoperative decision-making, procedure documentation, and skill analysis. LLM-SAP integrates two novel modules: the Near-History Focus Memory Module (NHF-MM), which models historical states, and a prompts factory for action planning. We evaluate LLM-SAP on our constructed CholecT50-SAP dataset using models such as Qwen2.5 and Qwen2-VL, demonstrating its effectiveness in next-action prediction. Pre-trained LLMs are evaluated zero-shot, and supervised fine-tuning (SFT) with LoRA is implemented to address data privacy concerns. Our experiments show that Qwen2.5-72B-SFT surpasses Qwen2.5-72B by 19.3% in accuracy.
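The LoRA-based SFT mentioned in the abstract can be illustrated with a minimal from-scratch sketch: a frozen linear layer augmented with a trainable low-rank update W + (α/r)·BA. This is a generic illustration of the LoRA technique, not the paper's actual implementation; the layer sizes, rank, and scaling values here are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A initialized small and random, B initialized to zero, so the
        # adapter starts as an identity perturbation (BA = 0).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(64, 64), r=8)
out = layer(torch.randn(2, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
# With r=8 and 64-dim in/out features: A has 8*64 and B has 64*8 parameters,
# i.e. 1024 trainable parameters versus 4160 in the frozen base layer.
```

In practice, frameworks such as Hugging Face PEFT apply this wrapping automatically to a pre-trained model's attention projections, which is how SFT of a large model like Qwen2.5-72B stays tractable while the base weights, and hence the original training data, remain untouched.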