Computer-assisted surgical (CAS) systems enhance surgical execution and outcomes by providing advanced support to surgeons. These systems often rely on deep learning models trained on complex data that is challenging to annotate. While synthetic data generation can address these challenges, enhancing the realism of such data is crucial. This work introduces a multi-stage pipeline for generating realistic synthetic data, featuring a fully fledged surgical simulator that automatically produces all the annotations required by modern CAS systems. The simulator generates a wider set of annotations than those available in public synthetic datasets, and offers a more complex and realistic simulation of surgical interactions than existing approaches, including the dynamics between surgical instruments and deformable anatomical environments. To further bridge the visual gap between synthetic and real data, we propose a lightweight, flexible image-to-image translation method based on Stable Diffusion (SD) and Low-Rank Adaptation (LoRA). The method leverages a limited amount of annotated data, enables efficient training, and preserves the integrity of the annotations generated by our simulator. Experimental validation shows that the proposed pipeline translates synthetic images into images with real-world characteristics that generalize to real-world contexts, improving both model training and CAS guidance. The code and the dataset are available at https://github.com/SanoScience/SimuScope.
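To make the SD + LoRA translation step concrete, the sketch below shows one plausible inference setup using the Hugging Face diffusers library; it is not the authors' exact pipeline. The base model ID, the LoRA checkpoint path, the prompt, and the strength value are illustrative assumptions: a low strength constrains how far the diffusion process departs from the input frame, which is one way to keep the simulator-generated annotations spatially valid.

```python
# Minimal sketch of LoRA-adapted Stable Diffusion image-to-image translation,
# assuming the Hugging Face `diffusers` library. Model ID, LoRA path, prompt,
# and strength are illustrative placeholders, not the paper's exact settings.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Load a base SD checkpoint; fp16 keeps memory usage low on a single GPU.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach LoRA weights fine-tuned on a limited set of annotated real images
# (hypothetical path; produced by a standard LoRA training script).
pipe.load_lora_weights("path/to/surgical_style_lora")

# Translate a rendered simulator frame toward a real-image appearance.
synthetic_frame = load_image("simulator_frame.png")
translated = pipe(
    prompt="endoscopic view of laparoscopic surgery, photorealistic",
    image=synthetic_frame,
    strength=0.35,        # small edits preserve geometry and annotations
    guidance_scale=7.5,
).images[0]
translated.save("translated_frame.png")
```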