Synthesizing personalized talking faces that preserve and highlight a speaker's unique style while maintaining accurate lip sync remains a significant challenge. A primary limitation of existing approaches is the intrinsic entanglement of speaker-specific talking style and semantic content within facial motions, which prevents faithful transfer of a speaker's persona to arbitrary speech. In this paper, we propose MirrorTalk, a generative framework built on a conditional diffusion model and equipped with a Semantically-Disentangled Style Encoder (SDSE) that distills pure style representations from a brief reference video. To exploit this representation effectively, we further introduce a hierarchical modulation strategy within the diffusion process: it guides synthesis by dynamically balancing the contributions of audio and style features across distinct facial regions, ensuring both precise lip synchronization and expressive full-face dynamics. Extensive experiments demonstrate that MirrorTalk significantly outperforms state-of-the-art methods in both lip-sync accuracy and personalization preservation.
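The abstract does not detail the SDSE's internals, so the following is only an illustrative sketch of how style/content disentanglement of this kind is commonly realized: a style branch that pools over time (treating style as approximately time-invariant) plus an orthogonality penalty that pushes frame-level semantic variation out of the style code. Everything here (`StyleContentEncoder`, `orthogonality_loss`, the GRU backbone, and all dimensions) is a hypothetical construction, not the paper's method.

```python
# Hypothetical disentanglement sketch -- not the paper's SDSE architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleContentEncoder(nn.Module):
    """Split a facial-motion sequence into a style code and content features.

    Style is treated as approximately time-invariant (temporal mean pooling),
    while content varies frame by frame with the spoken words.
    """
    def __init__(self, motion_dim: int, embed_dim: int):
        super().__init__()
        self.backbone = nn.GRU(motion_dim, embed_dim, batch_first=True)
        self.style_head = nn.Linear(embed_dim, embed_dim)
        self.content_head = nn.Linear(embed_dim, embed_dim)

    def forward(self, motion: torch.Tensor):
        # motion: (B, T, motion_dim) features from a brief reference clip
        feats, _ = self.backbone(motion)              # (B, T, embed_dim)
        style = self.style_head(feats.mean(dim=1))    # (B, embed_dim), pooled
        content = self.content_head(feats)            # (B, T, embed_dim)
        return style, content

def orthogonality_loss(style: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between the style code and per-frame content,
    discouraging speech-driven (semantic) variation in the style code."""
    style = F.normalize(style, dim=-1).unsqueeze(1)   # (B, 1, E)
    content = F.normalize(content, dim=-1)            # (B, T, E)
    return (style * content).sum(dim=-1).pow(2).mean()

# Usage: encode a 50-frame reference clip and compute the penalty.
enc = StyleContentEncoder(motion_dim=64, embed_dim=128)
style, content = enc(torch.randn(2, 50, 64))
loss = orthogonality_loss(style, content)
```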
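Similarly, the described hierarchical modulation can be pictured as region-wise gating inside the denoiser: each facial region's motion latent is conditioned on a learned blend of audio and style features via FiLM-style scale/shift. The sketch below is a minimal, hypothetical rendering of that idea; `RegionModulation`, `HierarchicalModulation`, and the mouth/upper-face split are assumptions, not the paper's exact mechanism.

```python
# Hypothetical region-wise gating sketch -- not the paper's exact mechanism.
import torch
import torch.nn as nn

class RegionModulation(nn.Module):
    """FiLM-style modulation of one facial region's motion latent, conditioned
    on a learned blend of audio (content) and style (persona) features."""
    def __init__(self, latent_dim: int, cond_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * cond_dim, cond_dim), nn.SiLU(),
            nn.Linear(cond_dim, 1), nn.Sigmoid(),
        )
        self.to_scale_shift = nn.Linear(cond_dim, 2 * latent_dim)

    def forward(self, h, audio_feat, style_feat):
        # h: (B, T, latent_dim); audio_feat, style_feat: (B, T, cond_dim)
        g = self.gate(torch.cat([audio_feat, style_feat], dim=-1))  # (B, T, 1)
        cond = g * audio_feat + (1.0 - g) * style_feat              # learned blend
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        return h * (1.0 + scale) + shift

class HierarchicalModulation(nn.Module):
    """Separate gates per region: the mouth gate can lean toward audio for
    lip sync while the upper-face gate leans toward style for persona cues."""
    def __init__(self, latent_dim: int, cond_dim: int):
        super().__init__()
        self.mouth = RegionModulation(latent_dim, cond_dim)
        self.upper = RegionModulation(latent_dim, cond_dim)

    def forward(self, h_mouth, h_upper, audio_feat, style_feat):
        return (self.mouth(h_mouth, audio_feat, style_feat),
                self.upper(h_upper, audio_feat, style_feat))

# Usage inside one denoising step (shapes only; the diffusion loop is omitted).
mod = HierarchicalModulation(latent_dim=128, cond_dim=64)
h_m, h_u = torch.randn(2, 16, 128), torch.randn(2, 16, 128)
audio, style = torch.randn(2, 16, 64), torch.randn(2, 16, 64)
out_m, out_u = mod(h_m, h_u, audio, style)
```

The design choice worth noting is that the audio/style balance is predicted per timestep rather than fixed, which is one plausible reading of "dynamically balancing the contributions of audio and style features across distinct facial regions."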