The increasing demand for virtual reality applications has highlighted the importance of crafting immersive 3D assets. We present a text-to-3D 360$^{\circ}$ scene generation pipeline that creates comprehensive 360$^{\circ}$ scenes for in-the-wild environments within minutes. Our approach leverages the generative power of a 2D diffusion model together with prompt self-refinement to create a high-quality and globally coherent panoramic image, which serves as a preliminary "flat" (2D) scene representation. This panorama is then lifted into 3D Gaussians, with splatting techniques enabling real-time exploration. To produce consistent 3D geometry, our pipeline constructs a spatially coherent structure by aligning 2D monocular depth estimates into a globally optimized point cloud, which serves as the initial state for the centroids of the 3D Gaussians. To address the unobserved regions inherent in single-view inputs, we impose semantic and geometric constraints on both synthesized and input camera views as regularizers; these guide the optimization of the Gaussians and aid in reconstructing unseen regions. In summary, our method offers a globally consistent 3D scene within a 360$^{\circ}$ perspective, providing an enhanced immersive experience over existing techniques. Project website: http://dreamscene360.github.io/
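To make the lifting step concrete, the following is a minimal sketch of how an equirectangular panorama with a per-pixel monocular depth map can be unprojected into a 3D point cloud that seeds the Gaussian centroids. This is an illustrative reconstruction of the standard spherical unprojection, not the authors' released implementation; the function name and the y-up axis convention are assumptions.

```python
import numpy as np

def panorama_to_pointcloud(rgb, depth):
    """Unproject an equirectangular panorama with per-pixel depth
    into a 3D point cloud (points + colors).

    rgb:   (H, W, 3) float array in [0, 1]
    depth: (H, W) float array of (possibly relative) depths
    """
    H, W = depth.shape
    # Pixel centers -> spherical angles under the equirectangular
    # mapping: longitude in [-pi, pi), latitude in [-pi/2, pi/2].
    u = (np.arange(W) + 0.5) / W                # horizontal, [0, 1)
    v = (np.arange(H) + 0.5) / H                # vertical,   [0, 1)
    lon = (u - 0.5) * 2.0 * np.pi               # azimuth
    lat = (0.5 - v) * np.pi                     # elevation
    lon, lat = np.meshgrid(lon, lat)            # (H, W) each

    # Unit ray directions on the sphere, scaled by depth
    # (y-up convention assumed here).
    dirs = np.stack([
        np.cos(lat) * np.sin(lon),              # x
        np.sin(lat),                            # y (up)
        np.cos(lat) * np.cos(lon),              # z (forward)
    ], axis=-1)                                 # (H, W, 3)
    points = dirs * depth[..., None]            # (H, W, 3)

    return points.reshape(-1, 3), rgb.reshape(-1, 3)

# The returned points can initialize the centroids of the 3D
# Gaussians, with the colors initializing their appearance.
```

In the pipeline described above, the depth map would first be globally aligned (e.g., per-region scale and shift optimization) before unprojection, so that the resulting point cloud is spatially coherent across the full 360$^{\circ}$ field of view.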