Advancements in computer-assisted surgical procedures rely heavily on accurate interpretation of visual data from the camera systems used during surgery. Existing open-access surgical datasets are often limited in size, typically comprising fewer than 100 videos and fewer than 100K images. To address these constraints, we compiled Surg-3M, a new dataset built with a novel aggregation pipeline that collects high-resolution surgical videos from online sources. Featuring over 4K surgical videos and more than 3 million high-quality images spanning multiple procedure types, Surg-3M surpasses existing alternatives in both size and scope, and introduces two novel tasks. To demonstrate the effectiveness of this dataset, we present SurgFM, a self-supervised foundation model pretrained on Surg-3M that achieves strong results in downstream tasks such as surgical phase recognition, action recognition, and tool presence detection. Combining key components of ConvNeXt and DINO with an innovative augmented distillation method, SurgFM outperforms specialist architectures across multiple benchmarks. Our experimental results show that SurgFM surpasses state-of-the-art models in several downstream tasks, with significant gains in surgical phase recognition (+8.9pp, +4.7pp, and +3.9pp Jaccard on AutoLaparo, M2CAI16, and Cholec80, respectively), action recognition (+3.1pp mAP on CholecT50), and tool presence detection (+4.6pp mAP on Cholec80). Moreover, even when using only half of the data, SurgFM outperforms state-of-the-art models on AutoLaparo and matches state-of-the-art performance on Cholec80. Both Surg-3M and SurgFM have significant potential to accelerate progress towards autonomous robotic surgery systems.