In recent years, a range of neural network-based methods for image rendering have been introduced. For instance, the widely researched neural radiance fields (NeRF) rely on a neural network to represent 3D scenes, allowing for realistic view synthesis from a small number of 2D images. However, most NeRF models are constrained by long training and inference times. In comparison, Gaussian Splatting (GS) is a novel, state-of-the-art technique for rendering points in a 3D scene by approximating their contribution to image pixels through Gaussian distributions, enabling fast training and swift, real-time rendering. A drawback of GS is the absence of a well-defined approach for conditioning it, since several hundred thousand Gaussian components must be conditioned. To solve this, we introduce the Gaussian Mesh Splatting (GaMeS) model, a hybrid of a mesh and Gaussian distributions that pins all Gaussian splats to the object surface (mesh). The unique contribution of our method is defining Gaussian splats solely based on their location on the mesh, allowing for automatic adjustments in position, scale, and rotation during animation. As a result, we obtain high-quality views in real time. Furthermore, we demonstrate that in the absence of a predefined mesh, it is possible to fine-tune the initial mesh during the learning process.
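The core idea of pinning each splat to the mesh can be illustrated with a conceptual sketch: if a Gaussian's mean is a convex combination of a triangle's vertices and its rotation and scale are derived from the face's edges and normal, then any deformation of the mesh automatically moves, rotates, and rescales the splat. The barycentric weights, the face-frame construction, and the function name below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def gaussian_on_face(vertices, bary, scale_factors):
    """Illustrative sketch: derive a splat's parameters from a triangle face.

    vertices: (3, 3) array of triangle vertex positions (one row per vertex).
    bary: (3,) convex weights (non-negative, summing to 1) locating the
          Gaussian on the face -- an assumed parameterization.
    scale_factors: (3,) learned per-axis factors relative to the face frame.
    Returns (mean, rotation, scale) for the splat.
    """
    v0, v1, v2 = vertices
    # Convex combination pins the mean to the face: moving the
    # vertices moves the Gaussian with them.
    mean = bary @ vertices

    # Build an orthonormal frame from the face: normal plus two tangents.
    e1 = v1 - v0
    n = np.cross(e1, v2 - v0)
    n /= np.linalg.norm(n)
    t1 = e1 / np.linalg.norm(e1)
    t2 = np.cross(n, t1)
    rotation = np.stack([n, t1, t2], axis=1)  # columns are the frame axes

    # Tie the scale to face dimensions so it shrinks/grows with the mesh;
    # the tiny normal-direction scale keeps the splat flat on the surface.
    scale = scale_factors * np.array(
        [1e-6, np.linalg.norm(e1), np.linalg.norm(v2 - mean)]
    )
    return mean, rotation, scale
```

Under this parameterization, animating the mesh requires no re-optimization: re-evaluating the function on the deformed vertices yields the updated splats.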