Recently, 3D Gaussian splatting (3D-GS) has gained popularity in novel-view scene synthesis. It addresses the challenges of lengthy training times and slow rendering speeds associated with Neural Radiance Fields (NeRFs): through fast, differentiable rasterization of 3D Gaussians, 3D-GS achieves real-time rendering and accelerated training. 3D-GS, however, demands substantial memory for both training and storage, since each scene's point-cloud representation requires millions of Gaussians. We present a technique that uses quantized embeddings to significantly reduce per-point storage requirements, together with a coarse-to-fine training strategy for faster and more stable optimization of the Gaussian point clouds. Our approach also introduces a pruning stage that yields scene representations with fewer Gaussians, leading to faster training and real-time rendering of high-resolution scenes. We reduce storage memory by more than an order of magnitude while preserving reconstruction quality. We validate our approach on a variety of datasets and scenes, preserving visual quality while consuming 10-20x less memory and achieving faster training and inference. The project page and code are available at https://efficientgaussian.github.io
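As a rough illustration of the quantized-embedding idea, the sketch below vector-quantizes per-Gaussian attribute vectors (e.g. color or opacity features) with a small k-means codebook, so each point stores a single integer index into a shared table instead of a full float vector. This is an offline approximation for illustration only, not the learned end-to-end quantization described above; the function name and parameters are hypothetical.

```python
import numpy as np

def quantize_attributes(attrs, codebook_size=256, iters=10, seed=0):
    """Vector-quantize per-point attribute vectors with a k-means codebook.

    Storage drops from one float32 vector per point to one integer index
    per point plus a small shared codebook.
    """
    rng = np.random.default_rng(seed)
    n, d = attrs.shape
    # Initialize the codebook from randomly chosen attribute vectors.
    codebook = attrs[rng.choice(n, size=codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Assign each vector to its nearest code (squared Euclidean distance).
        d2 = ((attrs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        # Move each code to the mean of the vectors assigned to it.
        for k in range(codebook_size):
            members = attrs[idx == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    # A codebook of <= 256 entries lets indices fit in a single byte.
    if codebook_size <= 256:
        idx = idx.astype(np.uint8)
    return idx, codebook

# Example: 500 Gaussians with 8-dimensional attributes.
attrs = np.random.default_rng(1).normal(size=(500, 8)).astype(np.float32)
idx, codebook = quantize_attributes(attrs, codebook_size=16)
```

Under these assumptions, N points with d float32 attribute dimensions shrink from 4·N·d bytes to N index bytes plus a 4·codebook_size·d-byte shared table, which is where an order-of-magnitude storage reduction can come from.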