Recent works have shown that neural radiance fields (NeRFs) built on top of parametric models reach SOTA quality for photorealistic head avatars reconstructed from a monocular video. However, one major limitation of NeRF-based avatars is slow rendering, caused by NeRF's dense point sampling, which prevents broader use on resource-constrained devices. We introduce LightAvatar, the first head avatar model based on neural light fields (NeLFs). LightAvatar renders an image from 3DMM parameters and a camera pose via a single network forward pass, without using mesh or volume rendering. The proposed approach, while conceptually appealing, poses significant challenges to real-time efficiency and training stability. To resolve them, we introduce dedicated network designs that obtain proper representations for the NeLF model while maintaining a low FLOPs budget. Meanwhile, we adopt a distillation-based training strategy that uses a pretrained avatar model as the teacher to synthesize abundant pseudo data for training. A warping field network is introduced to correct the fitting error in the real data so that the model can learn better. Extensive experiments suggest that our method achieves new SOTA image quality quantitatively and qualitatively, while being significantly faster than its counterparts, reporting 174.1 FPS (512x512 resolution) on a consumer-grade GPU (RTX3090) with no customized optimization.
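The core idea of rendering via a single forward pass can be illustrated with a toy sketch: each pixel becomes one ray, the ray is concatenated with a conditioning code (standing in for the 3DMM parameters), and a small network maps it directly to RGB, with no per-ray point sampling or volume integration. This is a minimal illustration only, not the paper's architecture; the `TinyNeLF` class, its layer sizes, and the ray parameterization are all made-up assumptions.

```python
import numpy as np

def generate_rays(pose, H, W, focal):
    """Build one ray (origin + direction) per pixel from a camera-to-world pose."""
    i, j = np.meshgrid(np.arange(W), np.arange(H))
    # Pinhole camera: pixel -> camera-space viewing direction.
    dirs = np.stack([(i - W / 2) / focal,
                     -(j - H / 2) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    rays_d = dirs @ pose[:3, :3].T                     # rotate into world space
    rays_o = np.broadcast_to(pose[:3, 3], rays_d.shape)  # camera center for every ray
    return rays_o.reshape(-1, 3), rays_d.reshape(-1, 3)

class TinyNeLF:
    """Toy light-field network: one pass maps (ray, conditioning code) -> RGB.

    A hypothetical two-layer MLP with random weights; a real model would be
    trained (e.g. via distillation from a NeRF teacher, as in the text).
    """
    def __init__(self, code_dim=8, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 6 + code_dim  # ray origin (3) + direction (3) + code
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.W2 = rng.standard_normal((hidden, 3)) * 0.1

    def render(self, rays_o, rays_d, code):
        x = np.concatenate(
            [rays_o, rays_d, np.tile(code, (len(rays_o), 1))], axis=1)
        h = np.maximum(x @ self.W1, 0.0)          # ReLU
        return 1.0 / (1.0 + np.exp(-(h @ self.W2)))  # sigmoid -> RGB in [0, 1]

H, W, focal = 16, 16, 20.0
pose = np.eye(4)                                  # identity camera pose
rays_o, rays_d = generate_rays(pose, H, W, focal)
code = np.zeros(8)                                # placeholder for 3DMM parameters
img = TinyNeLF().render(rays_o, rays_d, code).reshape(H, W, 3)
print(img.shape)  # (16, 16, 3)
```

Note that the entire image is produced by a single matrix-multiply pipeline over all rays at once; this is what lets a NeLF avoid the dense per-ray sampling that makes NeRF rendering slow.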