In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion so that the semantics conveyed by the original facial motions are preserved after retargeting. To achieve this, we propose a local patch-based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frames. These patches are processed by the Reenactment Module to generate correspondingly re-enacted target local patches. The Weight Estimation Module then calculates the animation parameters for the target character at every frame to create a complete facial animation sequence. Extensive experiments demonstrate that our method successfully transfers the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportions.