We present NBAvatar, a method for realistic rendering of head avatars that handles non-rigid deformations caused by hand-face interaction. We introduce a novel representation for animated avatars that combines the training of oriented planar primitives with neural rendering. This combination of explicit and implicit representations enables NBAvatar to maintain temporally and pose-consistent geometry while preserving the fine-grained appearance details provided by neural rendering. In our experiments, we demonstrate that NBAvatar implicitly learns the color transformations caused by hand-face interactions and surpasses existing approaches in novel-view and novel-pose rendering quality. Specifically, NBAvatar achieves up to a 30% LPIPS reduction at megapixel resolution compared to Gaussian-based avatar methods, while also improving PSNR and SSIM, and attains higher structural similarity than InteractAvatar, the state-of-the-art hand-face interaction method.