In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. First, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusion model, coupled with a motion module, to convert the landmark sequence into photorealistic and temporally consistent portrait animation. Experimental results demonstrate the superiority of AniPortrait in terms of facial naturalness, pose diversity, and visual quality, thereby offering an enhanced perceptual experience. Moreover, our methodology exhibits considerable flexibility and controllability, and can be effectively applied to areas such as facial motion editing and face reenactment. We release code and model weights at https://github.com/scutzzj/AniPortrait.
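The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative stub, not the actual AniPortrait implementation: all function names, landmark counts, frame rates, and tensor shapes are assumptions chosen for illustration, and the learned models are replaced by placeholder computations.

```python
import numpy as np

# Hypothetical sketch of the two-stage pipeline (names and shapes are
# assumptions; the real system uses trained audio and diffusion models).

N_LANDMARKS = 68   # assumed number of 2D facial landmarks per frame
FPS = 25           # assumed output frame rate

def audio_to_landmarks(audio: np.ndarray, sr: int) -> np.ndarray:
    """Stage 1 stub: audio -> 3D intermediate representation -> 2D landmarks."""
    n_frames = int(len(audio) / sr * FPS)
    mesh_seq = np.zeros((n_frames, N_LANDMARKS, 3))  # 3D representation per frame
    landmarks_2d = mesh_seq[..., :2]                 # project to 2D (drop depth)
    return landmarks_2d

def landmarks_to_frames(landmarks: np.ndarray, ref_image: np.ndarray) -> np.ndarray:
    """Stage 2 stub: landmark sequence + reference portrait -> video frames.
    In the real method this is a diffusion model with a motion module."""
    n_frames = landmarks.shape[0]
    return np.broadcast_to(ref_image, (n_frames, *ref_image.shape)).copy()

audio = np.zeros(16000 * 2)                     # 2 s of 16 kHz audio
ref = np.zeros((256, 256, 3), dtype=np.uint8)   # reference portrait image
lms = audio_to_landmarks(audio, sr=16000)
frames = landmarks_to_frames(lms, ref)
print(lms.shape, frames.shape)                  # (50, 68, 2) (50, 256, 256, 3)
```

The point of the sketch is the data flow: audio determines the temporal structure via landmarks, while the reference image fixes identity and appearance for every generated frame.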