Gaussian Splatting demonstrates impressive results in multi-view reconstruction based on explicit Gaussian representations. However, current Gaussian primitives carry only a single view-dependent color and a single opacity to represent the appearance and geometry of the scene, resulting in a non-compact representation. In this paper, we introduce a new method called SuperGaussians that exploits spatially varying colors and opacity within a single Gaussian primitive to improve its representational ability. We implement bilinear interpolation, movable kernels, and even tiny neural networks as the spatially varying functions. Quantitative and qualitative experiments demonstrate that all three functions outperform the baseline, with the best-performing movable kernels achieving superior novel view synthesis on multiple datasets, highlighting the strong potential of spatially varying functions.
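To make the idea of a spatially varying function concrete, the following is a minimal sketch, assuming a simple parameterization not specified in the abstract: each splat stores a 2×2 grid of learnable corner colors and opacities, and the per-point color and opacity are recovered by bilinear interpolation over local splat coordinates. All names and shapes here are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def bilinear_spatially_varying(uv, corner_colors, corner_opacities):
    """Hypothetical sketch of a spatially varying function on one splat.

    uv               : (2,) local coordinates in [0, 1]^2 on the splat
    corner_colors    : (2, 2, 3) RGB values at the four corners (assumed learnable)
    corner_opacities : (2, 2) opacity values at the four corners (assumed learnable)
    Returns the interpolated color and opacity at the queried point.
    """
    u, v = uv
    # Bilinear weights for the four corners of the 2x2 grid.
    w = np.array([[(1 - u) * (1 - v), (1 - u) * v],
                  [u * (1 - v),       u * v]])
    color = np.einsum("ij,ijc->c", w, corner_colors)      # weighted sum of corner colors
    opacity = np.einsum("ij,ij->", w, corner_opacities)   # weighted sum of corner opacities
    return color, opacity
```

A standard single-color, single-opacity Gaussian is the degenerate case where all four corners share the same values; the movable-kernel and tiny-network variants mentioned above would replace the bilinear lookup with their own spatially varying evaluation.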