Gaussian Splatting has emerged as a high-performance technique for novel view synthesis, enabling real-time rendering and high-quality reconstruction of small scenes. However, scaling to larger environments has so far relied on partitioning the scene into chunks -- a strategy that introduces artifacts at chunk boundaries, complicates training across varying scales, and is poorly suited to unstructured scenarios such as city-scale flyovers combined with street-level views. Moreover, rendering remains fundamentally limited by GPU memory, as all visible chunks must reside in VRAM simultaneously. We introduce A LoD of Gaussians, a framework for training and rendering ultra-large-scale Gaussian scenes on a single consumer-grade GPU -- without partitioning. Our method stores the full scene out-of-core (e.g., in CPU memory) and trains a Level-of-Detail (LoD) representation directly, dynamically streaming only the relevant Gaussians. A hybrid data structure combining Gaussian hierarchies with Sequential Point Trees enables efficient, view-dependent LoD selection, while a lightweight caching and view scheduling system exploits temporal coherence to support real-time streaming and rendering. Together, these innovations enable seamless multi-scale reconstruction and interactive visualization of complex scenes -- from broad aerial views to fine-grained ground-level details.
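To make the view-dependent LoD selection concrete, below is a minimal NumPy sketch in the spirit of Sequential Point Trees: the Gaussian hierarchy is flattened into arrays, and each node stores the distance interval over which it is the appropriate level of detail, so the per-view cut reduces to a simple range test. All names here (`select_lod_cut`, `d_min`, `d_max`) are illustrative assumptions, not the paper's actual data structures or API.

```python
import numpy as np

def select_lod_cut(centers, d_min, d_max, cam_pos):
    """Return indices of hierarchy nodes forming the LoD cut for one view.

    centers : (N, 3) node (cluster) centers of the flattened hierarchy
    d_min, d_max : (N,) per-node distance interval; a coarse node covers
                   large view distances, its finer children cover smaller ones
    cam_pos : (3,) camera position in world space
    """
    d = np.linalg.norm(centers - cam_pos, axis=1)   # per-node view distance
    keep = (d >= d_min) & (d < d_max)               # node is "just detailed enough"
    return np.nonzero(keep)[0]                      # only these Gaussians need streaming

# Toy usage: a two-level hierarchy with one coarse parent and two fine children.
centers = np.array([[0.0, 0.0, 0.0],    # parent (coarse)
                    [-0.5, 0.0, 0.0],   # child (fine)
                    [ 0.5, 0.0, 0.0]])  # child (fine)
d_min = np.array([10.0, 0.0, 0.0])
d_max = np.array([np.inf, 10.0, 10.0])
print(select_lod_cut(centers, d_min, d_max, cam_pos=np.array([0.0, 0.0, 3.0])))
# -> [1 2]: the camera is close, so the fine children are selected over the parent.
```

In an out-of-core setting, the returned index set would drive which Gaussians are fetched from CPU memory into a GPU-resident cache; exploiting temporal coherence, consecutive frames share most of this set, so only the small difference needs to be streamed each frame.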