Talking-head animation focuses on generating realistic facial videos from audio input. Following Generative Adversarial Networks (GANs), diffusion models have become the mainstream approach owing to their strong generative capacity. However, inherent limitations of the diffusion process often lead to inter-frame flicker and slow inference, restricting practical deployment. To address this, we introduce AvatarSync, an autoregressive framework built on phoneme representations that generates realistic and controllable talking-head animations from a single reference image, driven directly by text or audio input. To mitigate flicker and ensure continuity, AvatarSync adopts an autoregressive pipeline that strengthens temporal modeling. To ensure controllability, we introduce phonemes, the basic units of speech sound, and construct a many-to-one mapping from text/audio to phonemes, enabling precise phoneme-to-visual alignment. To further accelerate inference, we employ a two-stage generation strategy that decouples semantic modeling from visual dynamics, and incorporate a customized Phoneme-Frame Causal Attention Mask to support multi-step parallel acceleration. Extensive experiments on both Chinese (CMLR) and English (HDTF) datasets demonstrate that AvatarSync outperforms existing talking-head animation methods in visual fidelity, temporal consistency, and computational efficiency, providing a scalable and controllable solution.
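To make the masking idea concrete, below is a minimal sketch (not the paper's implementation) of a phoneme-frame causal attention mask under the assumption that each video frame is assigned to one phoneme and phoneme indices are non-decreasing over time: frames may attend to every frame whose phoneme block is at or before their own, so all frames of a phoneme can be decoded in parallel while causality is preserved across phonemes. The function name and block layout are illustrative assumptions.

```python
# Hypothetical sketch of a phoneme-frame causal attention mask; the actual
# AvatarSync mask may differ. Frames attend to frames of the same or earlier
# phoneme blocks, enabling parallel decoding within a phoneme block.
import torch

def phoneme_frame_causal_mask(frame_to_phoneme: torch.Tensor) -> torch.Tensor:
    """frame_to_phoneme: (T,) non-decreasing phoneme index per frame.
    Returns a (T, T) boolean mask where True means attention is allowed."""
    p = frame_to_phoneme
    # Frame i may attend to frame j iff phoneme(j) <= phoneme(i).
    return p.unsqueeze(1) >= p.unsqueeze(0)

# Example: 6 frames covering 3 phonemes (2 frames per phoneme).
mask = phoneme_frame_causal_mask(torch.tensor([0, 0, 1, 1, 2, 2]))
# Convert to an additive bias for use with scaled dot-product attention.
bias = torch.zeros_like(mask, dtype=torch.float).masked_fill(~mask, float("-inf"))
```

Compared with a strict per-frame causal mask, this block structure is what permits multi-step parallel generation: all frames sharing a phoneme see the same attention context and can be emitted in one step.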