The increasing demand for virtual reality applications has highlighted the significance of crafting immersive 3D assets. We present a text-to-3D 360$^{\circ}$ scene generation pipeline that facilitates the creation of comprehensive 360$^{\circ}$ scenes for in-the-wild environments in a matter of minutes. Our approach utilizes the generative power of a 2D diffusion model and prompt self-refinement to create a high-quality and globally coherent panoramic image, which serves as a preliminary "flat" (2D) scene representation. This image is subsequently lifted into 3D Gaussians and rendered with splatting techniques to enable real-time exploration. To produce consistent 3D geometry, our pipeline constructs a spatially coherent structure by aligning 2D monocular depth estimates into a globally optimized point cloud, which serves as the initial state for the centroids of the 3D Gaussians. To address the unobserved regions inherent in single-view inputs, we impose semantic and geometric constraints on both synthesized and input camera views as regularizations; these guide the optimization of the Gaussians and aid the reconstruction of unseen regions. In summary, our method produces a globally consistent 3D scene within a 360$^{\circ}$ perspective, offering a more immersive experience than existing techniques. Project website: http://dreamscene360.github.io/
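To make the panorama-to-point-cloud lifting step concrete, the following is a minimal sketch of how an equirectangular panorama with per-pixel monocular depth can be back-projected onto spherical rays to produce 3D points that initialize Gaussian centroids. The function name, the coordinate convention (y-up), and the assumption of an already scale-aligned depth map are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def panorama_to_pointcloud(depth, rgb):
    """Back-project an equirectangular panorama into a 3D point cloud.

    depth: (H, W) depth map, assumed already globally scale-aligned
    rgb:   (H, W, 3) panorama colors in [0, 1]
    Returns (N, 3) points and (N, 3) colors, e.g. to seed Gaussian
    centroids and their initial colors.
    """
    H, W = depth.shape
    # Spherical coordinates of each pixel center:
    # longitude theta in [-pi, pi), latitude phi in [-pi/2, pi/2].
    theta = (np.arange(W) + 0.5) / W * 2.0 * np.pi - np.pi
    phi = np.pi / 2.0 - (np.arange(H) + 0.5) / H * np.pi
    theta, phi = np.meshgrid(theta, phi)  # both (H, W)
    # Unit ray directions on the sphere (y-up convention),
    # scaled by per-pixel depth to obtain 3D positions.
    dirs = np.stack([
        np.cos(phi) * np.sin(theta),  # x
        np.sin(phi),                  # y (up)
        np.cos(phi) * np.cos(theta),  # z
    ], axis=-1)
    points = dirs * depth[..., None]
    return points.reshape(-1, 3), rgb.reshape(-1, 3)
```

In the full pipeline, points produced this way would seed the means of the 3D Gaussians, which are then refined under the semantic and geometric regularizations described above to reconstruct regions invisible from the input view.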