Speech-driven 3D facial animation is challenging due to the scarcity of large-scale visual-audio datasets despite extensive research. Most prior works, typically focused on learning regression models on a small dataset using the method of least squares, encounter difficulties generating diverse lip movements from speech and require substantial effort to refine the generated outputs. To address these issues, we propose speech-driven 3D facial animation with a diffusion model (SAiD), a lightweight Transformer-based U-Net with a cross-modality alignment bias between audio and visual features to enhance lip synchronization. Moreover, we introduce BlendVOCA, a benchmark dataset of pairs of speech audio and parameters of a blendshape facial model, to address the scarcity of public resources. Our experimental results demonstrate that the proposed approach achieves comparable or superior lip synchronization to baselines, produces more diverse lip movements, and streamlines the animation editing process.
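The cross-modality alignment bias can be pictured as an additive penalty on the cross-attention scores between visual (blendshape-coefficient) frames and audio frames that discourages attending to temporally distant audio. The PyTorch sketch below illustrates one plausible formulation under that assumption, a distance-based bias after rescaling the audio time axis to the visual frame rate; the class and parameter names (AlignmentBiasedCrossAttention, slope) are illustrative and do not reflect the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentBiasedCrossAttention(nn.Module):
    """Cross-attention from visual (blendshape) tokens to audio features,
    with an additive bias favoring temporally aligned positions.
    Hypothetical sketch; not the authors' code."""
    def __init__(self, dim: int, num_heads: int = 4, slope: float = 1.0):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        self.slope = slope  # strength of the off-diagonal penalty (assumed hyperparameter)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual: (B, T_v, dim) noisy blendshape-coefficient tokens at a diffusion step
        # audio:  (B, T_a, dim) audio features, e.g. from a pretrained speech encoder
        B, T_v, _ = visual.shape
        T_a = audio.shape[1]
        q = self.q_proj(visual).view(B, T_v, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(audio).view(B, T_a, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(audio).view(B, T_a, self.num_heads, self.head_dim).transpose(1, 2)

        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (B, H, T_v, T_a)

        # Alignment bias: penalize attention between visual frame i and audio frame j
        # in proportion to their distance after rescaling audio positions to the visual rate.
        vis_idx = torch.arange(T_v, device=visual.device).float().unsqueeze(1)  # (T_v, 1)
        aud_idx = torch.arange(T_a, device=visual.device).float().unsqueeze(0)  # (1, T_a)
        aud_idx = aud_idx * (T_v / max(T_a, 1))
        bias = -self.slope * (vis_idx - aud_idx).abs()  # zero along the aligned "diagonal"
        scores = scores + bias  # broadcast over batch and heads

        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T_v, -1)
        return self.out_proj(out)
```

In this reading, the bias plays a role similar to a relative-position penalty: lip-movement tokens attend mostly to the speech features occurring at the same moment, which is one way such a mechanism could promote lip synchronization.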