Recent progress in large-scale zero-shot speech synthesis has been significantly advanced by language models and diffusion models. However, the generation process of both methods is slow and computationally intensive. Efficient speech synthesis that achieves quality on par with previous work at a lower computing budget remains a significant challenge. In this paper, we present FlashSpeech, a large-scale zero-shot speech synthesis system that requires approximately 5\% of the inference time of previous work. FlashSpeech is built on the latent consistency model and applies a novel adversarial consistency training approach that can be trained from scratch without a pre-trained diffusion model as the teacher. Furthermore, a new prosody generator module enhances the diversity of prosody, making the rhythm of the speech sound more natural. FlashSpeech generates speech efficiently in one or two sampling steps while maintaining high audio quality and high similarity to the audio prompt for zero-shot speech generation. Our experimental results demonstrate the superior performance of FlashSpeech. Notably, FlashSpeech can be about 20 times faster than other zero-shot speech synthesis systems while maintaining comparable voice quality and similarity. Furthermore, FlashSpeech demonstrates its versatility by efficiently performing tasks such as voice conversion, speech editing, and diverse speech sampling. Audio samples can be found at https://flashspeech.github.io/.
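The one- or two-step generation mentioned above follows the standard multistep consistency-model sampling loop: map noise to a data estimate in one call, optionally re-noise at an intermediate noise level, and map again. The following is a minimal sketch of that loop, not the actual FlashSpeech system; `consistency_fn` is a toy stand-in (a real system would use the trained latent consistency network), and the noise schedule constants are illustrative assumptions.

```python
import numpy as np

# Illustrative noise-level bounds (assumptions, not FlashSpeech's values).
SIGMA_MAX, SIGMA_MIN = 80.0, 0.002

def consistency_fn(x, sigma, target):
    # Toy consistency function: blends the input with `target` so that
    # f(x, SIGMA_MIN) ~= x (the boundary condition consistency models
    # enforce) while large sigma maps x close to the data estimate.
    c_skip = SIGMA_MIN**2 / (sigma**2 + SIGMA_MIN**2)
    c_out = 1.0 - c_skip
    return c_skip * x + c_out * target

def sample(target, steps=2, seed=0):
    """Multistep consistency sampling: denoise, re-noise, denoise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=SIGMA_MAX, size=target.shape)  # start from pure noise
    sigmas = np.linspace(SIGMA_MAX, SIGMA_MIN, steps + 1)[:-1]
    for i, sigma in enumerate(sigmas):
        x = consistency_fn(x, sigma, target)            # jump to a data estimate
        if i + 1 < steps:                               # re-noise for the next step
            x = x + rng.normal(scale=sigmas[i + 1], size=x.shape)
    return x

latent = np.ones(4)              # pretend "clean" latent, for illustration only
out = sample(latent, steps=2)    # two network calls, as in the fast setting
```

Because each step is a single forward pass rather than a long iterative denoising chain, the cost per utterance drops roughly in proportion to the step count, which is the source of the speedup claimed above.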