Animating stylized avatars with dynamic poses and expressions has attracted increasing attention for its broad range of applications. Previous research has made significant progress by training controllable generative models to synthesize animations based on reference characteristics, pose, and expression conditions. However, the mechanisms these methods use to control pose and expression often inadvertently introduce unintended identity features from the target motion, while also losing expression-related detail, particularly when applied to stylized animation. This paper proposes AniFaceDiff, a new Stable Diffusion-based method that incorporates a new conditioning module for animating stylized avatars. First, we propose a refined spatial conditioning approach via Facial Alignment that prevents identity characteristics of the target motion from leaking into the output. Second, we introduce an Expression Adapter that adds cross-attention layers to compensate for the loss of expression-related information. Our approach effectively preserves the pose and expression of the target video while maintaining consistency with the input image. Extensive experiments demonstrate that our method achieves state-of-the-art results in image quality, preservation of reference features, and expression accuracy, particularly for out-of-domain animation across diverse styles, highlighting its versatility and strong generalization. This work aims to improve the quality of virtual stylized animation for positive applications. To promote responsible use in virtual environments, we also contribute to the detection of generated content by evaluating state-of-the-art detectors, highlighting potential areas for improvement, and suggesting solutions.
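The Expression Adapter is described as adding cross-attention layers that inject expression information into the diffusion backbone. Below is a minimal, hypothetical PyTorch sketch of such an adapter-style cross-attention layer; the class and parameter names (ExpressionAdapterAttention, expr_dim, scale) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an Expression-Adapter-style conditioning layer:
# an extra cross-attention block that injects expression embeddings into
# U-Net hidden states. Names and dimensions are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpressionAdapterAttention(nn.Module):
    def __init__(self, hidden_dim: int, expr_dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_dim // num_heads
        # Queries come from the U-Net hidden states; keys and values
        # come from the expression embedding sequence.
        self.to_q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.to_k = nn.Linear(expr_dim, hidden_dim, bias=False)
        self.to_v = nn.Linear(expr_dim, hidden_dim, bias=False)
        self.to_out = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hidden_states: torch.Tensor,
                expr_embeds: torch.Tensor,
                scale: float = 1.0) -> torch.Tensor:
        b, n, _ = hidden_states.shape
        q = self.to_q(hidden_states)
        k = self.to_k(expr_embeds)
        v = self.to_v(expr_embeds)
        # Split into attention heads: (batch, heads, seq, head_dim).
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(b, n, -1)
        # Scaled residual add, so the adapter blends in without
        # overwriting the pretrained attention pathways.
        return hidden_states + scale * self.to_out(attn)

# Usage: inject expression tokens into flattened U-Net features.
layer = ExpressionAdapterAttention(hidden_dim=320, expr_dim=64)
feats = torch.randn(2, 4096, 320)  # flattened 64x64 feature map
expr = torch.randn(2, 16, 64)      # 16 expression tokens per frame
out = layer(feats, expr)
print(out.shape)  # torch.Size([2, 4096, 320])
```

The residual `scale` follows the common adapter pattern: the injected expression condition can be blended in gradually, leaving the pretrained cross-attention for the reference image untouched.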