The emergence of 3D Gaussian Splatting has fundamentally redefined the capabilities of photorealistic neural rendering by enabling high-throughput synthesis of complex environments. While procedural methods like Wang Tiles have recently been integrated to facilitate the generation of expansive landscapes, these systems typically remain constrained by a reliance on densely sampled exemplar reconstructions. We present DAV-GSWT, a data-efficient framework that leverages diffusion priors and active view sampling to synthesize high-fidelity Gaussian Splatting Wang Tiles from minimal input observations. By integrating a hierarchical uncertainty quantification mechanism with generative diffusion models, our approach autonomously identifies the most informative viewpoints while hallucinating missing structural details to ensure seamless tile transitions. Experimental results indicate that our system significantly reduces the required data volume while maintaining the visual integrity and interactive performance necessary for large-scale virtual environments.
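The seamless tile transitions mentioned above rest on the standard Wang-tile constraint: adjacent tiles must agree on their shared edge. The following minimal sketch (a hypothetical integer-coded tile set, not the paper's actual Gaussian-splat tiles) shows how a grid can be filled so that every tile's north and west edges match its already-placed neighbors.

```python
import random

# Hypothetical tile set: each tile is keyed by its four edge labels
# (north, east, south, west). With 2 labels per edge there are 2^4 = 16
# tiles, so a matching tile always exists for any edge constraint.
TILES = [
    (n, e, s, w)
    for n in range(2) for e in range(2) for s in range(2) for w in range(2)
]

def tile_grid(rows, cols, seed=0):
    """Fill a rows x cols grid in scanline order with edge-compatible tiles."""
    rng = random.Random(seed)
    grid = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Constrain the new tile's north edge to match the south edge of
            # the tile above, and its west edge to match the east edge of the
            # tile to the left; border tiles are unconstrained on outer edges.
            options = [
                t for t in TILES
                if (r == 0 or t[0] == grid[r - 1][c][2])
                and (c == 0 or t[3] == grid[r][c - 1][1])
            ]
            grid[r][c] = rng.choice(options)
    return grid

grid = tile_grid(4, 4)
```

In the full system the edge labels would index boundary conditions of reconstructed Gaussian-splat tiles rather than integers, but the adjacency invariant enforced here is the same one that makes the assembled landscape seamless.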