While recent research has made significant progress in speech-driven talking face generation, the quality of the generated video still lags behind that of real recordings. One reason for this is the use of handcrafted intermediate representations like facial landmarks and 3DMM coefficients, which are designed based on human knowledge and are insufficient to precisely describe facial movements. Additionally, these methods require an external pretrained model for extracting these representations, whose performance sets an upper bound on talking face generation. To address these limitations, we propose a novel method called DAE-Talker that leverages data-driven latent representations obtained from a diffusion autoencoder (DAE). DAE contains an image encoder that encodes an image into a latent vector and a DDIM image decoder that reconstructs the image from it. We train our DAE on talking face video frames and then extract their latent representations as the training target for a Conformer-based speech2latent model. This allows DAE-Talker to synthesize full video frames and produce natural head movements that align with the content of speech, rather than relying on a predetermined head pose from a template video. We also introduce pose modelling in speech2latent for pose controllability. Additionally, we propose a novel method for generating continuous video frames with the DDIM image decoder trained on individual frames, eliminating the need for modelling the joint distribution of consecutive frames directly. Our experiments show that DAE-Talker outperforms existing popular methods in lip-sync, video fidelity, and pose naturalness. We also conduct ablation studies to analyze the effectiveness of the proposed techniques and demonstrate the pose controllability of DAE-Talker.
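To make the pipeline described above concrete, here is a minimal, illustrative sketch (not the authors' code) of the three stages: encoding video frames into DAE latents, training a speech2latent predictor on those latents, and decoding predicted latents into frames one at a time. The module choices (simple linear and GRU stand-ins for the diffusion autoencoder and the Conformer), dimensions, and training loop are all assumptions for illustration only.

```python
# Illustrative sketch of the DAE-Talker pipeline from the abstract.
# All architectures here are toy stand-ins; shapes and dims are assumptions.
import torch
import torch.nn as nn

LATENT_DIM, SPEECH_DIM, IMG_PIXELS = 512, 80, 3 * 64 * 64

class ImageEncoder(nn.Module):
    """Stand-in for the DAE image encoder: frame -> latent vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(IMG_PIXELS, LATENT_DIM)
    def forward(self, frames):                 # (T, IMG_PIXELS)
        return self.net(frames)                # (T, LATENT_DIM)

class DDIMDecoder(nn.Module):
    """Stand-in for the DDIM image decoder: latent -> frame.
    Trained on individual frames, so each video frame is decoded
    independently from its predicted latent at inference time."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, IMG_PIXELS)
    def forward(self, latents):                # (T, LATENT_DIM)
        return self.net(latents)               # (T, IMG_PIXELS)

class Speech2Latent(nn.Module):
    """Stand-in for the Conformer-based speech2latent model (a GRU here),
    mapping per-frame speech features to per-frame DAE latents."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(SPEECH_DIM, LATENT_DIM, batch_first=True)
    def forward(self, speech):                 # (1, T, SPEECH_DIM)
        latents, _ = self.rnn(speech)
        return latents.squeeze(0)              # (T, LATENT_DIM)

# Stage 1: train the DAE on talking face video frames (reconstruction loss, sketched).
encoder, decoder = ImageEncoder(), DDIMDecoder()
frames = torch.rand(100, IMG_PIXELS)           # 100 toy video frames
recon_loss = ((decoder(encoder(frames)) - frames) ** 2).mean()

# Stage 2: train speech2latent to predict the frozen DAE latents from speech.
speech2latent = Speech2Latent()
speech = torch.rand(1, 100, SPEECH_DIM)        # toy per-frame speech features
with torch.no_grad():
    target_latents = encoder(frames)
latent_loss = ((speech2latent(speech) - target_latents) ** 2).mean()

# Stage 3 (inference): speech -> predicted latents -> frames, decoded per frame.
with torch.no_grad():
    video = decoder(speech2latent(speech))     # (100, IMG_PIXELS)
```

In the actual method, the decoder is a DDIM that reconstructs each frame by iterative denoising conditioned on the latent, and pose information is modelled explicitly inside speech2latent; both are abstracted away in this sketch.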