Gaussian Splatting has emerged as a high-performance technique for novel view synthesis, enabling real-time rendering and high-quality reconstruction of small scenes. However, scaling to larger environments has so far relied on partitioning the scene into chunks -- a strategy that introduces artifacts at chunk boundaries, complicates training across varying scales, and is poorly suited to unstructured scenarios such as city-scale flyovers combined with street-level views. Moreover, rendering remains fundamentally limited by GPU memory, as all visible chunks must reside in VRAM simultaneously. We introduce A LoD of Gaussians, a framework for training and rendering ultra-large-scale Gaussian scenes on a single consumer-grade GPU -- without partitioning. Our method stores the full scene out-of-core (e.g., in CPU memory) and trains a Level-of-Detail (LoD) representation directly, dynamically streaming only the relevant Gaussians. A hybrid data structure combining Gaussian hierarchies with Sequential Point Trees enables efficient, view-dependent LoD selection, while a lightweight caching and view scheduling system exploits temporal coherence to support real-time streaming and rendering. Together, these innovations enable seamless multi-scale reconstruction and interactive visualization of complex scenes -- from broad aerial views to fine-grained ground-level details.
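The view-dependent LoD selection the abstract describes can be illustrated with a minimal sketch. In a Sequential-Point-Tree-style layout, the hierarchy is flattened into an array where each node carries a distance interval over which it is the appropriate level of detail, so selection reduces to a per-node range test against camera distance. The `Node` fields and `select_lod` function below are hypothetical simplifications, not the paper's actual data structure:

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    # Hypothetical flattened-hierarchy entry: a position plus the
    # camera-distance interval [d_min, d_max) over which this node
    # is the right level of detail (coarse nodes cover far distances,
    # fine leaves cover near ones).
    x: float
    y: float
    z: float
    d_min: float
    d_max: float

def select_lod(nodes, cam):
    """Keep each node whose camera distance falls in its validity interval.

    Because the test is independent per node, the flattened array can be
    processed sequentially (or in parallel on the GPU) with no tree traversal.
    """
    cx, cy, cz = cam
    selected = []
    for n in nodes:
        d = math.dist((n.x, n.y, n.z), (cx, cy, cz))
        if n.d_min <= d < n.d_max:
            selected.append(n)
    return selected
```

For a far-away camera only the coarse root-level node passes the range test, while a nearby camera selects the fine leaves instead; the real system additionally streams only the selected Gaussians from out-of-core storage.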