We present a novel approach for enhancing the resolution and geometric fidelity of 3D Gaussian Splatting (3DGS) reconstructions beyond their native training resolution. Current 3DGS methods are fundamentally limited by input resolution: they cannot recover details finer than those present in the training views. Our work removes this limitation with a lightweight generative model that predicts and refines additional 3D Gaussians where they are needed most. The key innovation is a Hessian-assisted sampling strategy that identifies the regions most likely to benefit from densification, keeping computation focused where it matters. Unlike computationally intensive GAN- or diffusion-based approaches, our method runs in real time (0.015 s per inference on a single consumer-grade GPU), making it practical for interactive applications. Comprehensive experiments demonstrate significant improvements in both geometric accuracy and rendering quality over state-of-the-art methods, establishing a new paradigm for resolution-free 3D scene enhancement.
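To make the Hessian-assisted sampling idea concrete, the sketch below shows one plausible (hypothetical, not the authors' actual implementation) form of the strategy: Gaussians whose loss-Hessian diagonal entries have large magnitude are treated as high-curvature, detail-sensitive regions and are sampled preferentially for densification. The function name `hessian_assisted_sample` and the diagonal-Hessian criterion are illustrative assumptions.

```python
import numpy as np

def hessian_assisted_sample(hessian_diag, num_samples, rng=None):
    """Sample Gaussian indices with probability proportional to |H_ii|.

    Illustrative sketch only: the paper's actual scoring criterion may
    use a different Hessian approximation or normalization.
    """
    rng = np.random.default_rng(rng)
    scores = np.abs(hessian_diag)          # per-Gaussian sensitivity score
    probs = scores / scores.sum()          # normalize to a distribution
    # Draw without replacement so each Gaussian is densified at most once.
    return rng.choice(len(probs), size=num_samples, replace=False, p=probs)

# Example: 1000 Gaussians, with a few high-curvature regions dominating.
h = np.ones(1000)
h[:10] = 100.0                             # these dominate the sampling
idx = hessian_assisted_sample(h, num_samples=5, rng=0)
```

Sampling proportionally to Hessian magnitude, rather than thresholding, keeps the procedure stochastic and cheap: only a score per existing Gaussian is needed, consistent with the abstract's emphasis on real-time operation.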