While recent research has made significant progress in speech-driven talking face generation, the quality of the generated video still lags behind that of real recordings. One reason for this is the use of handcrafted intermediate representations, such as facial landmarks and 3DMM coefficients, which are designed from human knowledge and are insufficient to precisely describe facial movements. Additionally, these methods require an external pretrained model to extract these representations, whose performance sets an upper bound on the quality of talking face generation. To address these limitations, we propose a novel method called DAE-Talker that leverages data-driven latent representations obtained from a diffusion autoencoder (DAE). The DAE consists of an image encoder that encodes an image into a latent vector and a DDIM-based image decoder that reconstructs the image from it. We train our DAE on talking face video frames and then extract their latent representations as the training targets for a Conformer-based speech2latent model. This allows DAE-Talker to synthesize full video frames and produce natural head movements that align with the content of speech, rather than relying on a predetermined head pose from a template video. We also introduce pose modelling into speech2latent for pose controllability. Additionally, we propose a novel method for generating continuous video frames with the DDIM image decoder trained on individual frames, eliminating the need to model the joint distribution of consecutive frames directly. Our experiments show that DAE-Talker outperforms existing popular methods in lip-sync, video fidelity, and pose naturalness. We also conduct ablation studies to analyze the effectiveness of the proposed techniques and demonstrate the pose controllability of DAE-Talker.
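The two-stage pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration of the data flow only: the real system uses a trained diffusion autoencoder and a Conformer-based speech2latent model, whereas every "network" below (`dae_encode`, `speech2latent`, `ddim_decode`) is a placeholder linear projection with invented dimensions, introduced purely to show which representation feeds which stage.

```python
# Hypothetical sketch of the DAE-Talker data flow (not the paper's implementation).
# Stage 1 (training): video frames -> latents via the DAE encoder; these latents
#   become the regression targets for speech2latent.
# Stage 2 (inference): speech -> latents via speech2latent; each latent is then
#   decoded to a frame by the DDIM image decoder, trained on individual frames.
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, LATENT_DIM, SPEECH_DIM = 64, 16, 32  # toy dimensions (assumptions)

# Placeholder "networks": fixed random projections standing in for trained models.
W_enc = rng.standard_normal((IMG_DIM, LATENT_DIM)) / np.sqrt(IMG_DIM)
W_s2l = rng.standard_normal((SPEECH_DIM, LATENT_DIM)) / np.sqrt(SPEECH_DIM)
W_dec = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(LATENT_DIM)

def dae_encode(frames):     # (T, IMG_DIM) -> (T, LATENT_DIM): DAE image encoder
    return frames @ W_enc

def speech2latent(speech):  # (T, SPEECH_DIM) -> (T, LATENT_DIM): Conformer stand-in
    return speech @ W_s2l

def ddim_decode(latents):   # (T, LATENT_DIM) -> (T, IMG_DIM): per-frame DDIM decoder
    return latents @ W_dec

# Stage 1: extract per-frame latents as training targets for speech2latent.
frames = rng.standard_normal((10, IMG_DIM))        # 10 flattened video frames
target_latents = dae_encode(frames)                # speech2latent learns to predict these

# Stage 2: predict latents from speech features, then decode frame by frame.
speech = rng.standard_normal((10, SPEECH_DIM))     # 10 speech feature vectors
pred_latents = speech2latent(speech)
video = ddim_decode(pred_latents)                  # one decoded frame per latent
```

Note that the decoder operates on one latent at a time, which mirrors the paper's point that consecutive frames are produced without directly modelling their joint distribution; temporal coherence is instead carried by the smoothness of the predicted latent sequence.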