3D Gaussian Splatting (3D-GS) is a novel method for scene representation and view synthesis. Although Scaffold-GS achieves higher-quality real-time rendering than the original 3D-GS, its fine-grained rendering of the scene depends heavily on adequate viewing angles. The spectral bias of neural-network learning leaves Scaffold-GS poorly equipped to perceive and learn high-frequency information in the scene. In this work, we propose enhancing the manifold complexity of the input features and using a network-based feature-map loss to improve the image-reconstruction quality of 3D-GS models. We introduce AH-GS, which enables 3D Gaussians in structurally complex regions to obtain higher-frequency encodings, allowing the model to learn the high-frequency information of the scene more effectively. We additionally incorporate a high-frequency reinforcement loss to further enhance the model's ability to capture detailed frequency information. Our results demonstrate that our model significantly improves rendering fidelity, and in specific scenes (e.g., Mip-NeRF 360 garden), our method exceeds the rendering quality of Scaffold-GS within just 15K iterations.
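The abstract does not specify how higher-frequency encodings are assigned to structurally complex regions. As a rough illustration only, assuming a NeRF-style sinusoidal positional encoding in which the number of active frequency bands grows with a hypothetical per-point structural-complexity score (all names and parameters here are our own, not from the paper), the idea could be sketched as:

```python
import numpy as np

def adaptive_freq_encode(points, complexity, num_bands=10, base_bands=4):
    """Sketch: sinusoidal encoding whose effective bandwidth varies per point.

    points:     (N, 3) array of 3D positions.
    complexity: (N,) array in [0, 1]; a hypothetical structural-complexity score
                (e.g., derived from local gradient statistics).
    Points with higher complexity keep more frequency bands, so the network can
    fit high-frequency detail there despite its spectral bias.
    """
    # Per-point band budget, interpolated between base_bands and num_bands.
    active = base_bands + np.round(complexity * (num_bands - base_bands)).astype(int)
    feats = [points]
    for i in range(num_bands):
        # Zero out bands above each point's budget; the feature dimension
        # stays fixed so the encoding can still be batched.
        mask = (i < active)[:, None].astype(points.dtype)
        feats.append(mask * np.sin((2.0 ** i) * np.pi * points))
        feats.append(mask * np.cos((2.0 ** i) * np.pi * points))
    return np.concatenate(feats, axis=-1)  # shape (N, 3 + 2 * num_bands * 3)
```

Masking rather than truncating the high bands keeps a fixed output dimension, which is one simple way to feed points with different effective bandwidths through the same MLP; the actual AH-GS mechanism may differ.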