Recent advancements in surgical computer vision have been driven by vision-only models, which lack language semantics and instead rely on manually annotated videos to predict a fixed set of object categories. This limits their generalizability to unseen surgical procedures and tasks. We propose leveraging surgical video lectures from e-learning platforms to provide effective vision and language supervisory signals for multi-modal representation learning, bypassing the need for manual annotations. We address surgery-specific linguistic challenges by using multiple automatic speech recognition systems to generate text transcriptions. We introduce SurgVLP (Surgical Vision Language Pre-training), a novel method for multi-modal representation learning. SurgVLP employs a new contrastive learning objective that aligns video clip embeddings with their multiple corresponding text embeddings in a joint latent space. We demonstrate the representational capability of this space through several vision-and-language surgical tasks and vision-only tasks specific to surgery. Unlike current fully supervised approaches, SurgVLP adapts to different surgical procedures and tasks without task-specific fine-tuning, achieving zero-shot adaptation to tasks such as surgical tool, phase, and triplet recognition without manual annotation. These results highlight the transferability and versatility of the learned multi-modal representations for surgical video analysis. The code is available at https://github.com/CAMMA-public/SurgVLP
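To make the contrastive objective concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that) of a CLIP-style loss that aligns each video clip embedding with multiple text embeddings per clip, e.g. one per automatic speech recognition system, as the abstract describes. The function and variable names (`multi_text_contrastive_loss`, `clip_emb`, `text_embs`) are illustrative assumptions, not names from the paper.

```python
# Hypothetical sketch of a multi-text contrastive (InfoNCE) objective.
# Assumes each video clip has K candidate text embeddings (e.g., one
# transcription per ASR system); matched clip-text pairs form positives.
import torch
import torch.nn.functional as F

def multi_text_contrastive_loss(clip_emb: torch.Tensor,
                                text_embs: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """clip_emb:  (B, D)    video clip embeddings
       text_embs: (B, K, D) K text embeddings per clip
    """
    clip_emb = F.normalize(clip_emb, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    B, K, _ = text_embs.shape
    targets = torch.arange(B, device=clip_emb.device)
    loss = clip_emb.new_zeros(())
    for k in range(K):
        # Cosine-similarity logits between all clips and the k-th text view.
        logits = clip_emb @ text_embs[:, k, :].T / temperature  # (B, B)
        # Symmetric InfoNCE: matched clip-text pairs lie on the diagonal.
        loss = loss + F.cross_entropy(logits, targets)
        loss = loss + F.cross_entropy(logits.T, targets)
    return loss / (2 * K)

# Usage (encoders assumed):
# loss = multi_text_contrastive_loss(video_encoder(frames), text_encoder(tokens))
```

Averaging the symmetric loss over the K text views is one plausible way to handle multiple transcriptions per clip; other aggregation choices (e.g., treating all K texts as positives in a single softmax) would also fit the description in the abstract.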