Purpose: Accurate 3D hand pose estimation supports surgical applications such as skill assessment, robot-assisted interventions, and geometry-aware workflow analysis. However, surgical environments pose severe challenges, including intense and localized lighting, frequent occlusions by instruments or staff, and uniform hand appearance due to gloves, combined with a scarcity of annotated datasets for reliable model training.

Method: We propose a robust multi-view pipeline for 3D hand pose estimation in surgical contexts that requires no domain-specific fine-tuning and relies solely on off-the-shelf pretrained models. The pipeline integrates reliable person detection, whole-body pose estimation, and state-of-the-art 2D hand keypoint prediction on tracked hand crops, followed by a constrained 3D optimization. In addition, we introduce a novel surgical benchmark dataset comprising over 68,000 frames and 3,000 manually annotated 2D hand poses with triangulated 3D ground truth, recorded in a replica operating room under varying levels of scene complexity.

Results: Quantitative experiments demonstrate that our method consistently outperforms baselines, achieving a 31% reduction in 2D mean joint error and a 76% reduction in 3D mean per-joint position error.

Conclusion: Our work establishes a strong baseline for 3D hand pose estimation in surgery, providing both a training-free pipeline and a comprehensive annotated dataset to facilitate future research in surgical computer vision.
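To make the lifting step concrete, the following is a minimal sketch of how 2D hand keypoints from multiple calibrated views can be lifted to 3D via linear (DLT) triangulation. This is only an illustrative building block, not the paper's actual constrained 3D optimization; the function name, the use of NumPy, and the assumption of known camera projection matrices are our own for this example.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of a single keypoint.

    proj_mats: list of 3x4 camera projection matrices (assumed calibrated)
    points_2d: list of (u, v) image coordinates, one per view
    Returns the 3D point minimizing the algebraic reprojection residual.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

In a full pipeline, a sketch like this would be run per joint across views, and the triangulated skeleton would then serve as the initialization for a constrained refinement (e.g. enforcing bone-length or kinematic priors), which the abstract refers to but does not specify.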