Reconstructing deformable tissues from endoscopic stereo videos is essential for many downstream surgical applications. However, existing methods suffer from slow inference speed, which greatly limits their practical use. In this paper, we introduce EndoGaussian, a real-time surgical scene reconstruction framework built on 3D Gaussian Splatting. Our framework represents dynamic surgical scenes as canonical Gaussians together with a time-dependent deformation field, which predicts Gaussian deformations at novel timestamps. Owing to the efficient Gaussian representation and parallel rendering pipeline, our framework significantly accelerates rendering compared to previous methods. In addition, we design the deformation field as the combination of a lightweight encoding voxel and an extremely tiny MLP, enabling efficient Gaussian tracking with only a minor rendering burden. Furthermore, we design a holistic Gaussian initialization method that fully leverages the surface distribution prior by searching for informative points across the input image sequence. Experiments on public endoscope datasets demonstrate that our method achieves real-time rendering speed (195 FPS, a 100$\times$ gain over previous methods) while maintaining state-of-the-art reconstruction quality (35.925 PSNR) and the fastest training speed (within 2 min/scene), showing significant promise for intraoperative surgery applications. Code is available at: \url{https://yifliu3.github.io/EndoGaussian/}.
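The deformation-field design described above (a feature voxel queried at each canonical Gaussian center, followed by a tiny MLP that predicts a per-Gaussian offset at a given timestamp) can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: all sizes are hypothetical, the voxel lookup uses nearest-neighbor instead of interpolation for brevity, and only position offsets are predicted (the full method also deforms other Gaussian attributes).

```python
import random

random.seed(0)

# Hypothetical sizes -- illustrative only, not the paper's configuration.
G, F, H = 4, 8, 16   # voxel resolution, feature channels, MLP hidden width

# Lightweight encoding voxel: a dense feature grid over (x, y, z).
voxel = [[[[random.gauss(0, 1) for _ in range(F)]
           for _ in range(G)] for _ in range(G)] for _ in range(G)]

# Extremely small MLP: one hidden layer, predicts a 3D position offset.
W1 = [[random.gauss(0, 0.1) for _ in range(H)] for _ in range(F + 1)]
W2 = [[random.gauss(0, 0.1) for _ in range(3)] for _ in range(H)]

def lookup(p):
    """Nearest-cell voxel feature lookup for a point in [0, 1]^3
    (the paper interpolates features; nearest-neighbor keeps the sketch short)."""
    i, j, k = (min(G - 1, int(c * G)) for c in p)
    return voxel[i][j][k]

def deform(xyz, t):
    """Map canonical Gaussian centers to their positions at timestamp t."""
    out = []
    for p in xyz:
        x = lookup(p) + [t]                                    # voxel features + time
        h = [max(0.0, sum(xi * w for xi, w in zip(x, col)))    # ReLU hidden layer
             for col in zip(*W1)]
        off = [sum(hi * w for hi, w in zip(h, col)) for col in zip(*W2)]
        out.append([c + o for c, o in zip(p, off)])
    return out

centers = [[random.random() for _ in range(3)] for _ in range(5)]
print(len(deform(centers, 0.5)))  # -> 5 deformed Gaussian centers
```

Because the MLP is so small, most of the deformation capacity lives in the voxel features, which is what keeps per-frame inference cheap enough for real-time rendering.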