Video-to-speech (V2S) synthesis, the task of generating speech directly from silent video input, is inherently more challenging than other speech synthesis tasks because both the speech content and the speaker characteristics must be reconstructed from visual cues alone. Recently, audio-visual pre-training has eliminated the need for the additional acoustic hints on which previous methods often relied to ensure training convergence. Even with pre-training, however, existing methods still struggle to balance acoustic intelligibility against the preservation of speaker-specific characteristics. Our analysis of this limitation motivates DiVISe (Direct Visual-Input Speech Synthesis), an end-to-end V2S model that predicts Mel-spectrograms directly from video frames alone. Despite taking no acoustic hints, DiVISe effectively preserves speaker characteristics in the generated audio and achieves superior performance on both objective and subjective metrics across the LRS2 and LRS3 datasets. Our results demonstrate that DiVISe not only outperforms existing V2S models in acoustic intelligibility but also scales more effectively with increased data and model parameters. Code and weights are available at https://github.com/PussyCat0700/DiVISe.
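To make the pipeline concrete, below is a minimal, hypothetical sketch of the video-to-Mel-spectrogram mapping described above, written in PyTorch. The layer choices, dimensions, and the fixed 4x temporal upsampling factor are illustrative assumptions, not DiVISe's actual architecture; the real model and weights are in the repository linked above.

```python
# A hypothetical sketch of a V2S front half: silent video frames in,
# Mel-spectrogram out. A neural vocoder (not shown) would then map the
# predicted Mel-spectrogram to a waveform.
import torch
import torch.nn as nn


class VideoToMel(nn.Module):
    def __init__(self, d_model: int = 256, n_mels: int = 80) -> None:
        super().__init__()
        # 3D conv frontend turns grayscale lip-region clips into per-frame features.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, d_model, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool away spatial dims, keep time
        )
        # Temporal encoder over the frame sequence.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Video runs at ~25 fps while Mel frames are denser, so this sketch
        # upsamples time by an assumed fixed factor of 4 by predicting
        # 4 Mel frames per video frame.
        self.upsample = 4
        self.n_mels = n_mels
        self.proj = nn.Linear(d_model, n_mels * self.upsample)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 1, time, height, width)
        feats = self.frontend(video)               # (B, d_model, T, 1, 1)
        feats = feats.flatten(2).transpose(1, 2)   # (B, T, d_model)
        feats = self.encoder(feats)                # (B, T, d_model)
        mel = self.proj(feats)                     # (B, T, n_mels * upsample)
        return mel.reshape(video.size(0), -1, self.n_mels)  # (B, 4T, n_mels)


if __name__ == "__main__":
    clip = torch.randn(2, 1, 25, 96, 96)  # one second of 25 fps lip crops
    mel = VideoToMel()(clip)
    print(mel.shape)  # torch.Size([2, 100, 80])
```

The key design point the abstract emphasizes is that this mapping is conditioned on video frames alone: no reference audio or other acoustic hint enters the network, so any speaker characteristics in the output must be inferred from the visual input.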