In the task of talking face generation, the objective is to generate a face video whose lips are synchronized with the corresponding audio while preserving visual details and identity information. Current methods struggle to learn accurate lip synchronization without degrading visual quality, and to evaluate that synchronization robustly. To tackle these problems, we propose utilizing an audio-visual speech representation expert (AV-HuBERT) to compute a lip synchronization loss during training. Moreover, leveraging AV-HuBERT's features, we introduce three novel lip synchronization evaluation metrics, aiming to provide a comprehensive assessment of lip synchronization performance. Experimental results, along with a detailed ablation study, demonstrate the effectiveness of our approach and the utility of the proposed evaluation metrics.
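The abstract does not specify the exact form of the AV-HuBERT-based lip synchronization loss. Below is a minimal, hypothetical sketch of one common formulation: per-frame audio and visual embeddings (here random placeholders standing in for frozen AV-HuBERT features) are compared with cosine similarity, and the loss is the mean dissimilarity. The function name, feature shapes, and loss form are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lip_sync_loss(audio_feats, visual_feats, eps=1e-8):
    """Hypothetical lip-sync loss: mean (1 - cosine similarity) between
    per-frame audio and visual embeddings, both assumed to come from a
    frozen AV-HuBERT encoder. Shapes: (num_frames, feat_dim)."""
    # L2-normalize each frame's embedding (eps guards against zero vectors)
    a = audio_feats / (np.linalg.norm(audio_feats, axis=-1, keepdims=True) + eps)
    v = visual_feats / (np.linalg.norm(visual_feats, axis=-1, keepdims=True) + eps)
    cos = np.sum(a * v, axis=-1)       # per-frame cosine similarity in [-1, 1]
    return float(np.mean(1.0 - cos))   # 0 when audio and lips are perfectly aligned

# Toy check: identical feature streams yield (near-)zero loss,
# opposed streams yield the maximum of ~2.
f = np.random.randn(10, 256)
print(lip_sync_loss(f, f))
print(lip_sync_loss(f, -f))
```

In practice such a loss would be back-propagated into the face generator while the AV-HuBERT encoder stays frozen, so that gradients push the generated lip motion toward the audio's representation rather than altering the expert itself.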