We present a zero-shot framework for transferring human facial expressions to 3D animal face meshes. Our method combines intrinsic geometric descriptors (HKS/WKS) with a mesh-agnostic latent embedding that disentangles facial identity from expression. The identity latent space captures species-independent facial structure, while the expression latent space encodes deformation patterns that generalize across humans and animals. Trained only on human expression pairs, the model learns to embed, decouple, and recombine identity and expression across subjects, enabling expression transfer without any animal expression data. To enforce geometric consistency, we employ a Jacobian loss together with vertex-position and Laplacian losses. Experiments show that our approach achieves plausible cross-species expression transfer, effectively bridging the geometric gap between human and animal facial shapes.