Talking head generation is increasingly important in virtual reality (VR), especially for social scenarios involving multi-turn conversation. Existing approaches face notable limitations: mesh-based 3D methods can model dual-person dialogue but lack realistic textures, while large-model-based 2D methods produce natural appearances but incur prohibitive computational costs. Recently, methods based on 3D Gaussian Splatting (3DGS) have achieved efficient and realistic rendering, but they remain speaker-only and ignore social relationships. We introduce RSATalker, the first framework that leverages 3DGS for realistic and socially-aware talking head generation with support for multi-turn conversation. Our method first drives mesh-based 3D facial motion from speech, then binds 3D Gaussians to mesh facets to render high-fidelity 2D avatar videos. To capture interpersonal dynamics, we propose a socially-aware module that encodes social relationships (blood versus non-blood ties, and equal versus unequal status) into high-level embeddings through a learnable query mechanism. We design a three-stage training paradigm and construct the RSATalker dataset, which contains speech-mesh-image triplets annotated with social relationships. Extensive experiments demonstrate that RSATalker achieves state-of-the-art performance in both realism and social awareness. The code and dataset will be released.
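The abstract's learnable query mechanism for encoding social relationships can be illustrated with a minimal sketch. All names, dimensions, and design choices below are illustrative assumptions, not the paper's actual implementation: discrete relationship labels over the two binary axes (blood/non-blood, equal/unequal) are embedded, and a set of learnable queries cross-attends to them to produce high-level social embeddings.

```python
import torch
import torch.nn as nn

class SociallyAwareModule(nn.Module):
    """Hypothetical sketch (not the paper's code): maps a discrete social
    relationship label to a set of high-level embeddings via learnable queries."""

    def __init__(self, dim=64, num_queries=4):
        super().__init__()
        # Two binary axes (blood/non-blood, equal/unequal) -> 4 label combinations.
        self.rel_embed = nn.Embedding(4, dim)
        # Learnable queries that extract different aspects of the relationship.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, rel_ids):
        # rel_ids: (batch,) integer relationship labels in {0, 1, 2, 3}.
        kv = self.rel_embed(rel_ids).unsqueeze(1)                  # (B, 1, dim)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)   # (B, Q, dim)
        out, _ = self.attn(q, kv, kv)                              # (B, Q, dim)
        # Social embeddings that could condition a downstream motion generator.
        return out

module = SociallyAwareModule()
emb = module(torch.tensor([0, 3]))  # e.g. blood+equal, non-blood+unequal
print(emb.shape)  # torch.Size([2, 4, 64])
```

In practice such embeddings would be injected into the speech-to-motion stage so that the generated facial dynamics reflect the conversational relationship; the four-way label space here is only a stand-in for whatever annotation scheme the dataset uses.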