Talking head synthesis has become a key research area in computer graphics and multimedia, yet most existing methods struggle to balance generation quality with computational efficiency. In this paper, we present a novel approach that leverages an Audio Factorization Plane (Audio-Plane) based Gaussian Splatting for high-quality and real-time talking head generation. Modeling a dynamic talking head requires a 4D volumetric representation; however, directly storing a dense 4D grid is impractical due to its high memory cost and poor scalability to longer durations. We overcome this challenge with the proposed Audio-Plane, which decomposes the 4D volumetric representation into audio-independent spatial planes and audio-dependent planes. This yields a compact and interpretable feature representation for talking heads, enabling more precise audio-aware spatial encoding and enhanced modeling of audio-driven lip dynamics. To further improve speech dynamics, we develop a dynamic splatting method that helps the network focus more effectively on modeling the motion of the mouth region. Extensive experiments demonstrate that, by integrating these innovations with the powerful Gaussian Splatting framework, our method synthesizes highly realistic talking videos in real time while ensuring precise audio-lip synchronization. Synthesized results are available at https://sstzal.github.io/Audio-Plane/.