Producing emotionally dynamic 3D facial avatars from spoken-word text (Emo3D) has been a pivotal research topic in 3D avatar generation. While progress has been made in general-purpose 3D avatar generation, emotional 3D avatar generation remains underexplored, primarily due to the complexity of identifying and rendering rich emotions from spoken words. This paper reexamines Emo3D generation and, drawing inspiration from human emotion processing, decomposes Emo3D into two cascading steps: Text-to-3D Expression Mapping (T3DEM) and 3D Avatar Rendering (3DAR). T3DEM is the most crucial step in determining the quality of Emo3D generation and encompasses three key challenges: Expression Diversity, Emotion-Content Consistency, and Expression Fluidity. To address these challenges, we introduce a novel benchmark to advance research in Emo3D generation. First, we present EmoAva, a large-scale, high-quality dataset for T3DEM comprising 15,000 text-to-3D expression mappings that characterize the three challenges above. We further develop a suite of metrics to effectively evaluate models against these challenges. Next, to model the consistency, diversity, and fluidity of human expressions in the T3DEM step, we propose the Continuous Text-to-Expression Generator, an autoregressive Conditional Variational Autoencoder for expression-code generation, enhanced with Latent Temporal Attention and Expression-wise Attention mechanisms. Finally, to improve the rendering of subtle, high-quality expressions in the 3DAR step, we present the Globally-informed Gaussian Avatar (GiGA) model. GiGA incorporates a global information mechanism into 3D Gaussian representations, enabling it to capture subtle micro-expressions and render seamless transitions between emotional states.
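The autoregressive conditional VAE at the heart of the T3DEM step can be illustrated with a minimal sketch. This is not the paper's implementation: all dimensions are illustrative (e.g. a 52-dimensional blendshape-like expression code), the `CTEG` class name and its layers are hypothetical, and a plain GRU stands in for the Latent Temporal Attention and Expression-wise Attention mechanisms. The sketch only shows the core idea: at each step, a latent is sampled from an autoregressive prior conditioned on the text embedding and the generation history, then decoded into the next expression code.

```python
import torch
import torch.nn as nn


class CTEG(nn.Module):
    """Hypothetical sketch of a Continuous Text-to-Expression Generator:
    an autoregressive conditional VAE emitting one expression code per step,
    conditioned on a text embedding. All sizes are illustrative."""

    def __init__(self, text_dim=64, expr_dim=52, latent_dim=16, hidden=128):
        super().__init__()
        # Posterior q(z_t | x_t, text, h): used only during training.
        self.enc = nn.Linear(expr_dim + text_dim + hidden, 2 * latent_dim)
        # Autoregressive prior p(z_t | text, h): used at generation time.
        self.prior = nn.Linear(text_dim + hidden, 2 * latent_dim)
        # Decoder p(x_t | z_t, text, h): maps latent to an expression code.
        self.dec = nn.Linear(latent_dim + text_dim + hidden, expr_dim)
        # GRU cell carries the generation history (a stand-in for the
        # paper's attention mechanisms, which are not reproduced here).
        self.rnn = nn.GRUCell(expr_dim, hidden)
        self.hidden = hidden

    @torch.no_grad()
    def generate(self, text_emb, steps=10):
        """Sample a sequence of expression codes from the prior."""
        batch = text_emb.size(0)
        h = text_emb.new_zeros(batch, self.hidden)
        seq = []
        for _ in range(steps):
            # Prior parameters from text condition + history.
            mu, logvar = self.prior(torch.cat([text_emb, h], -1)).chunk(2, -1)
            # Reparameterized sample of the step latent.
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            # Decode the next expression code and update the history.
            x = self.dec(torch.cat([z, text_emb, h], -1))
            h = self.rnn(x, h)
            seq.append(x)
        return torch.stack(seq, 1)  # shape: (batch, steps, expr_dim)


model = CTEG()
codes = model.generate(torch.randn(2, 64), steps=5)  # (2, 5, 52)
```

Sampling from a learned prior rather than copying a single deterministic trajectory is what allows one text input to yield diverse yet temporally coherent expression sequences.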