This work presents a framework for monocular 6D pose estimation of surgical instruments in open surgery, addressing challenges such as object articulations, specularity, occlusions, and synthetic-to-real domain adaptation. The proposed approach consists of three main components: $(1)$ a synthetic data generation pipeline that combines 3D scanning of surgical tools with articulation rigging and physically-based rendering; $(2)$ a tailored pose estimation framework that couples tool detection with pose and articulation estimation; and $(3)$ a training strategy that uses synthetic data together with unannotated real video, employing domain adaptation with automatically generated pseudo-labels. Evaluations on real open-surgery data demonstrate the strong performance and real-world applicability of the proposed framework, highlighting its potential for integration into medical augmented reality and robotic systems. The approach eliminates the need for extensive manual annotation of real surgical data.