Gaussian Splatting has revolutionized the field of Novel View Synthesis (NVS) with faster training and real-time rendering. However, its reconstruction fidelity still trails powerful radiance field models such as Zip-NeRF. Motivated by our theoretical result that both queries (such as coordinates) and neighborhoods are important for learning high-fidelity signals, this paper proposes Queried Convolutions (Qonvolutions), a simple yet powerful modification that exploits the neighborhood-aggregating property of convolution. Qonvolutions convolve a low-fidelity signal with queries to output a residual, achieving high-fidelity reconstruction. We empirically demonstrate that combining Gaussian Splatting with Qonvolution neural networks (QNNs) yields state-of-the-art NVS on real-world scenes, even outperforming Zip-NeRF on image fidelity. QNNs also improve performance on 1D regression, 2D regression, and 2D super-resolution tasks.
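To make the core idea concrete, the following is a minimal hypothetical sketch of a 1D Qonvolution layer, not the paper's actual architecture: the low-fidelity signal and its query coordinates are stacked as input channels, a kernel is convolved over each local neighborhood, and the result is added back to the coarse signal as a residual correction. The function name, argument layout, and padding choice are all illustrative assumptions.

```python
import numpy as np

def qonvolution_1d(low_fidelity, queries, weights, bias=0.0):
    """Hypothetical sketch of a 1D Qonvolution (illustrative, not the
    paper's architecture): convolve the stacked [signal, query] channels
    over local neighborhoods and add the result as a residual.

    low_fidelity: (N,) coarse signal values
    queries:      (N,) query coordinates (e.g. sample positions)
    weights:      (2, K) kernel over the two input channels
    """
    n = low_fidelity.shape[0]
    k = weights.shape[1]
    pad = k // 2
    # Stack signal and queries as two input channels, pad the borders.
    x = np.stack([low_fidelity, queries])                # (2, N)
    x = np.pad(x, ((0, 0), (pad, pad)), mode="edge")     # (2, N + 2*pad)
    # Slide the kernel over each neighborhood of both channels.
    residual = np.array([
        np.sum(weights * x[:, i:i + k]) + bias for i in range(n)
    ])
    # High-fidelity output = coarse signal + predicted residual.
    return low_fidelity + residual
```

In practice the kernel weights would be learned, and the same residual structure lets the network refine a coarse rendering (e.g. a Gaussian Splatting output) using both neighborhood context and query information.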