In our previous work, we introduced CosyVoice, a multilingual speech synthesis model based on supervised discrete speech tokens. By employing progressive semantic decoding with two popular generative models, language models (LMs) and flow matching, CosyVoice demonstrated high prosody naturalness, content consistency, and speaker similarity in speech in-context learning. Recently, significant progress has been made in multi-modal large language models (LLMs), where the response latency and real-time factor of speech synthesis play a crucial role in the interactive experience. Therefore, in this report, we present an improved streaming speech synthesis model, CosyVoice 2, which incorporates comprehensive and systematic optimizations. Specifically, we introduce finite-scalar quantization to improve the codebook utilization of speech tokens. For the text-speech LM, we streamline the model architecture to allow direct use of a pre-trained LLM as the backbone. In addition, we develop a chunk-aware causal flow matching model to support various synthesis scenarios, enabling both streaming and non-streaming synthesis within a single model. By training on a large-scale multilingual dataset, CosyVoice 2 achieves human-parity naturalness, minimal response latency, and virtually lossless synthesis quality in streaming mode. We invite readers to listen to the demos at https://funaudiollm.github.io/cosyvoice2.
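To make the codebook-utilization point concrete, below is a minimal illustrative sketch of finite-scalar quantization (FSQ) in PyTorch. It is not the CosyVoice 2 implementation: the level counts, the tanh bounding, and the straight-through estimator are assumptions chosen for brevity, and only odd per-channel level counts are handled (even counts would require an extra half-step offset).

```python
# Illustrative FSQ sketch (assumed, not the CosyVoice 2 code).
# Each channel of a low-dimensional latent is bounded and rounded to a small
# integer grid; the product of per-channel level counts is the implicit
# codebook size, and every code is reachable by construction.
import torch

def fsq(z: torch.Tensor, levels=(5, 5, 5, 5)) -> torch.Tensor:
    """Quantize each channel of z with shape (..., len(levels)).

    Channels are squashed with tanh to (-1, 1), scaled to their level range,
    and rounded; a straight-through estimator keeps gradients flowing.
    Assumes odd level counts for simplicity.
    """
    levels_t = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (levels_t - 1) / 2              # e.g. 5 levels -> grid {-2, ..., 2}
    bounded = torch.tanh(z) * half         # bound each channel to its range
    quantized = torch.round(bounded)       # snap to the integer grid
    # Straight-through estimator: forward pass uses the rounded values,
    # backward pass uses the gradient of `bounded`.
    return bounded + (quantized - bounded).detach()

# Usage: 10 frames with a 4-dim latent; implicit codebook size 5**4 = 625.
codes = fsq(torch.randn(10, 4))
```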