Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them onto the laparoscopic image. During this process, preoperative 3D models extracted from CT or MRI data are registered to the intraoperative laparoscopic images. Most 3D-2D fusion algorithms rely on anatomical landmarks to guide registration, including the liver's inferior ridge, the falciform ligament, and the occluding contours. These landmarks are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and error-prone when done by an inexperienced user. Automating this process is therefore necessary for augmented reality to be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion Challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigates the possibilities of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: 1) a 2D and 3D landmark detection task and 2) a 3D-2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, together with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated; their proposed methods were evaluated on 16 images and 2 preoperative 3D models from 2 patients. All teams proposed deep learning-based methods for the 2D and 3D landmark segmentation tasks and differentiable rendering-based methods for the registration task. Based on the experimental outcomes, we propose three key hypotheses that determine current limitations and future directions for research in this domain.