Generating unbounded 3D scenes is crucial for large-scale scene understanding and simulation. Urban scenes, unlike natural landscapes, consist of complex man-made objects and structures such as roads, traffic signs, vehicles, and buildings. Creating a realistic and detailed urban scene therefore requires accurately representing the geometry and semantics of the underlying objects, beyond their visual appearance. In this work, we propose UrbanDiffusion, a 3D diffusion model that is conditioned on a Bird's-Eye View (BEV) map and generates an urban scene with both geometry and semantics in the form of a semantic occupancy map. Our model introduces a novel paradigm that learns the distribution of scene-level structures in a latent space and further enables expanding the synthesized scene to an arbitrary scale. After training on real-world driving datasets, our model can generate diverse urban scenes given BEV maps from a held-out set and also generalizes to maps synthesized by a driving simulator. We further demonstrate its application to scene image synthesis with a pretrained image generator as a prior.
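To make the BEV-conditioned latent-diffusion pipeline concrete, the following is a minimal PyTorch sketch of sampling a semantic occupancy grid from a BEV map. All module names (BEVEncoder, LatentDenoiser, OccupancyDecoder), tensor shapes, class counts, and the plain DDPM ancestral sampler are illustrative assumptions; the abstract does not specify UrbanDiffusion's actual architecture or sampler.

```python
# Hypothetical sketch: BEV-conditioned latent diffusion decoded to a
# semantic occupancy grid. Names, shapes, and the DDPM schedule are
# assumptions for illustration, not the paper's actual design.
import torch
import torch.nn as nn

class BEVEncoder(nn.Module):
    """Encodes a one-hot BEV semantic map into a 2D conditioning feature map."""
    def __init__(self, in_ch=8, cond_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, cond_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(cond_ch, cond_ch, 3, padding=1),
        )

    def forward(self, bev):
        return self.net(bev)

class LatentDenoiser(nn.Module):
    """Predicts the noise in the 3D scene latent; timestep embedding omitted for brevity."""
    def __init__(self, latent_ch=4, cond_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_ch + cond_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv3d(64, latent_ch, 3, padding=1),
        )

    def forward(self, z, cond):
        # Broadcast the 2D BEV condition along the vertical (height) axis.
        cond3d = cond.unsqueeze(2).expand(-1, -1, z.shape[2], -1, -1)
        return self.net(torch.cat([z, cond3d], dim=1))

class OccupancyDecoder(nn.Module):
    """Maps the denoised latent to per-voxel semantic class logits."""
    def __init__(self, latent_ch=4, num_classes=17):
        super().__init__()
        self.net = nn.Conv3d(latent_ch, num_classes, 1)

    def forward(self, z):
        return self.net(z)

@torch.no_grad()
def sample_occupancy(bev, encoder, denoiser, decoder, steps=50):
    """DDPM-style ancestral sampling in latent space, decoded to semantic occupancy."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    cond = encoder(bev)
    # 3D latent: (batch, channels, height, X, Y); height of 16 voxels is assumed.
    z = torch.randn(bev.shape[0], 4, 16, bev.shape[2], bev.shape[3])
    for t in reversed(range(steps)):
        eps = denoiser(z, cond)
        # Reverse-step mean under the epsilon parameterization.
        z = (z - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return decoder(z).argmax(dim=1)  # (batch, height, X, Y) per-voxel labels

# Toy usage: an 8-class, 64x64 BEV map conditions one sampled occupancy grid.
bev = torch.zeros(1, 8, 64, 64)
occ = sample_occupancy(bev, BEVEncoder(), LatentDenoiser(), OccupancyDecoder())
print(occ.shape)  # torch.Size([1, 16, 64, 64])
```

Broadcasting the 2D BEV condition over the vertical axis is one simple way to tie the generated 3D structure to the map layout; expansion to arbitrary scale would additionally require stitching or outpainting adjacent latents, which this sketch does not cover.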