We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time. The proposed method enables 2K-resolution rendering under a sparse-view camera setting. Unlike the original Gaussian Splatting and neural implicit rendering methods, which require per-subject optimization, we introduce Gaussian parameter maps defined on the source views and directly regress Gaussian Splatting properties for instant novel view synthesis without any fine-tuning or optimization. To this end, we train our Gaussian parameter regression module on a large amount of human scan data, jointly with a depth estimation module that lifts the 2D parameter maps to 3D space. The proposed framework is fully differentiable, and experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a far higher rendering speed.
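The core idea of lifting 2D parameter maps to 3D can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the per-pixel parameter dictionary, and the pinhole-camera intrinsics (`fx`, `fy`, `cx`, `cy`) are all illustrative assumptions. It shows how a predicted depth map can back-project each foreground pixel to a 3D Gaussian center, carrying along the per-pixel Gaussian attributes regressed on the source view.

```python
# Hypothetical sketch (not the GPS-Gaussian code): lifting a 2D Gaussian
# parameter map to 3D Gaussian primitives via a predicted depth map.

def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth value to a point in camera space,
    using an assumed pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def lift_parameter_map(depth_map, param_map, fx, fy, cx, cy):
    """Turn per-pixel Gaussian parameters into a list of 3D Gaussians.

    depth_map[v][u] -> estimated depth (<= 0 marks background/invalid pixels)
    param_map[v][u] -> dict of Gaussian attributes (e.g. color, opacity,
                       scale) regressed by the network on the source view
    """
    gaussians = []
    for v, row in enumerate(depth_map):
        for u, d in enumerate(row):
            if d <= 0:  # skip background / invalid pixels
                continue
            center = unproject(u, v, d, fx, fy, cx, cy)
            # Each pixel yields one Gaussian: a 3D center plus its 2D-map attrs.
            gaussians.append({"center": center, **param_map[v][u]})
    return gaussians
```

Because every step is a simple differentiable mapping from the depth and parameter maps, gradients from a rendering loss could flow back through the unprojection into both regression modules, consistent with the fully differentiable framework described above.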