Recent advances in 3D Gaussian Splatting (3D-GS) have shown remarkable success in representing 3D scenes and generating high-quality novel views in real time. However, 3D-GS and its variants assume that input images are captured under a pinhole camera model and are fully in focus. This assumption limits their applicability, as real-world images often exhibit shallow depth-of-field (DoF). In this paper, we introduce DoF-Gaussian, a controllable depth-of-field method for 3D-GS. We develop a lens-based imaging model grounded in geometric optics to control DoF effects. To ensure accurate scene geometry, we incorporate depth priors adjusted per scene, and we apply defocus-to-focus adaptation to minimize the gap in the circle of confusion. We also introduce a synthetic dataset to assess refocusing capabilities and the model's ability to learn precise lens parameters. Our framework is customizable and supports various interactive applications. Extensive experiments confirm the effectiveness of our method. Our project is available at https://dof-gaussian.github.io.
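The lens-based imaging model mentioned above rests on standard thin-lens geometry, where the circle-of-confusion (CoC) diameter grows as a point moves away from the focus plane. The following is a minimal sketch of that textbook relation only, not the paper's implementation; the function name and parameters are illustrative.

```python
def coc_diameter(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter at a given scene depth.

    All arguments share one unit (e.g., meters):
      depth      -- distance from the lens to the scene point
      focus_dist -- distance to the in-focus plane (must exceed focal_len)
      focal_len  -- lens focal length
      aperture   -- aperture diameter (larger aperture -> shallower DoF)
    """
    # Standard geometric-optics formula: blur is zero at the focus plane
    # and increases with the relative defocus |depth - focus_dist| / depth.
    return aperture * abs(depth - focus_dist) / depth * focal_len / (focus_dist - focal_len)

# Illustrative values: a 50 mm lens at f/2 focused at 2 m.
f, A, d_f = 0.05, 0.025, 2.0
in_focus = coc_diameter(2.0, d_f, f, A)   # zero blur at the focus plane
behind   = coc_diameter(4.0, d_f, f, A)   # positive blur behind the plane
far      = coc_diameter(8.0, d_f, f, A)   # blur grows with defocus
```

Controlling `aperture` and `focus_dist` in such a model is what makes the DoF effect adjustable at render time.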