Recent advances in diffusion models have revolutionized audio-driven talking head synthesis. Beyond precise lip synchronization, diffusion-based methods excel at generating subtle expressions and natural head movements that are well aligned with the audio signal. However, these methods suffer from slow inference, insufficient fine-grained control over facial motions, and occasional visual artifacts, largely because they operate in an implicit latent space derived from Variational Auto-Encoders (VAEs), which prevents their adoption in real-time interactive applications. To address these issues, we introduce Ditto, a diffusion-based framework that enables controllable real-time talking head synthesis. Our key innovation lies in bridging motion generation and photorealistic neural rendering through an explicit identity-agnostic motion space that replaces conventional VAE representations. This design substantially reduces the complexity of diffusion learning while enabling precise control over the synthesized talking heads. We further propose an inference strategy that jointly optimizes three key components: audio feature extraction, motion generation, and video synthesis. This optimization enables streaming processing, real-time inference, and low first-frame delay, capabilities that are crucial for interactive applications such as AI assistants. Extensive experimental results demonstrate that Ditto generates compelling talking head videos and substantially outperforms existing methods in both motion control and real-time performance.
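To make the streaming claim concrete, the following is a minimal sketch of how a three-stage inference pipeline can be overlapped so that the first frame is emitted as soon as the first audio chunk clears all stages, rather than after the whole clip is processed. The stage functions (`extract_audio_features`, `generate_motion`, `render_frames`) are hypothetical placeholders standing in for the actual components; this is an illustration of the pipelining pattern, not the paper's implementation.

```python
# Sketch of a three-stage streaming pipeline with hypothetical stage
# functions; real models would replace the placeholder bodies below.
import queue
import threading

def run_stage(fn, inbox, outbox):
    """Pull chunks from inbox, process them, push results to outbox."""
    while True:
        chunk = inbox.get()
        if chunk is None:          # sentinel: propagate shutdown downstream
            outbox.put(None)
            break
        outbox.put(fn(chunk))

def extract_audio_features(audio_chunk):   # hypothetical placeholder
    return {"features": audio_chunk}

def generate_motion(features):             # hypothetical placeholder
    return {"motion": features}

def render_frames(motion):                 # hypothetical placeholder
    return {"frames": motion}

def streaming_pipeline(audio_chunks):
    """Overlap the three stages so frames stream out incrementally,
    yielding low first-frame delay instead of batch-style latency."""
    q_audio, q_feat, q_motion, q_video = (queue.Queue() for _ in range(4))
    stages = [
        (extract_audio_features, q_audio, q_feat),
        (generate_motion, q_feat, q_motion),
        (render_frames, q_motion, q_video),
    ]
    threads = [threading.Thread(target=run_stage, args=s) for s in stages]
    for t in threads:
        t.start()
    for chunk in audio_chunks:
        q_audio.put(chunk)         # feed audio as it arrives
    q_audio.put(None)              # signal end of stream
    while (out := q_video.get()) is not None:
        yield out                  # each frame is available immediately
    for t in threads:
        t.join()

if __name__ == "__main__":
    for frame in streaming_pipeline(range(3)):
        print(frame)
```

Because every stage works on a different chunk at the same time, steady-state throughput is bounded by the slowest stage rather than by the sum of all three, which is what makes real-time interaction feasible.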