We introduce FaceTalk, a novel generative approach for synthesizing high-fidelity 3D motion sequences of talking human heads from an input audio signal. To capture the expressive, detailed nature of human heads, including hair, ears, and fine-scale eye movements, we propose to couple the speech signal with the latent space of neural parametric head models (NPHMs) to create high-fidelity, temporally coherent motion sequences. For this task, we propose a new latent diffusion model operating in the expression space of NPHMs to synthesize audio-driven, realistic head sequences. In the absence of a dataset pairing NPHM expressions with audio, we optimize for these correspondences to produce a dataset of temporally optimized NPHM expressions fit to audio-video recordings of people talking. To the best of our knowledge, this is the first work to propose a generative approach for realistic, high-quality motion synthesis of volumetric human heads, representing a significant advancement in audio-driven 3D animation. Notably, our approach stands out in its ability to generate plausible motion sequences that, coupled with the NPHM shape space, produce high-fidelity head animation. Our experimental results substantiate the effectiveness of FaceTalk, consistently achieving superior, visually natural motion encompassing diverse facial expressions and styles, and outperforming existing methods by 75% in a perceptual user study.