Generating 3D human gestures and speech from a text script is critical for creating realistic talking avatars. One solution is to chain separate pipelines for text-to-speech (TTS) and speech-to-gesture (STG), but this approach suffers from poor speech-gesture alignment and slow inference. In this paper, we introduce FastTalker, an efficient and effective framework that simultaneously generates high-quality speech audio and 3D human gestures at high inference speed. Our key insight is to reuse the intermediate features from speech synthesis for gesture generation, as these features carry more precise rhythmic information than features re-extracted from the generated speech. Specifically, 1) we propose an end-to-end framework that concurrently generates speech waveforms and full-body gestures, feeding intermediate speech features such as pitch, onset, energy, and duration directly into the gesture decoder; 2) we redesign the network architecture to be causal, eliminating dependencies on future inputs for real-world applications; 3) we employ reinforcement-learning-based neural architecture search (NAS) to optimize the network architecture, improving both performance and inference speed. Experimental results on the BEAT2 dataset demonstrate that FastTalker achieves state-of-the-art performance in both speech synthesis and gesture generation, generating each second of speech and gesture in 0.17 seconds on an NVIDIA RTX 3090.
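To illustrate the causality constraint mentioned in contribution 2), here is a minimal sketch (not the paper's actual architecture) of a causal 1D convolution: the input is padded only on the past side, so each output step depends solely on current and earlier samples, never on future inputs. The function name and kernel values are illustrative assumptions.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1D convolution that left-pads the input so that each output
    step depends only on current and past samples (no future leakage)."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # pad the past, never the future
    # y[t] = sum_i kernel[i] * x[t - i]
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

# Perturbing a future sample must not affect earlier outputs.
x = np.arange(8, dtype=float)
kernel = np.array([0.5, 0.3, 0.2])
y1 = causal_conv1d(x, kernel)
x2 = x.copy()
x2[5] += 10.0                       # change only a "future" input at t = 5
y2 = causal_conv1d(x2, kernel)
assert np.allclose(y1[:5], y2[:5])  # outputs before t = 5 are unchanged
```

The same property is what allows a model to run in a streaming setting: at each time step, gestures and audio can be emitted without waiting for later text or speech features.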